00:00:00.001 Started by upstream project "autotest-per-patch" build number 132551 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.015 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.017 The recommended git tool is: git 00:00:00.018 using credential 00000000-0000-0000-0000-000000000002 00:00:00.021 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.037 Fetching changes from the remote Git repository 00:00:00.040 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.062 Using shallow fetch with depth 1 00:00:00.062 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.062 > git --version # timeout=10 00:00:00.086 > git --version # 'git version 2.39.2' 00:00:00.086 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.117 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.117 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:02.651 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:02.664 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:02.675 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:02.675 > git config core.sparsecheckout # timeout=10 00:00:02.688 > git read-tree -mu HEAD # timeout=10 00:00:02.703 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:02.731 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:02.731 > git rev-list 
--no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:02.841 [Pipeline] Start of Pipeline 00:00:02.856 [Pipeline] library 00:00:02.857 Loading library shm_lib@master 00:00:02.858 Library shm_lib@master is cached. Copying from home. 00:00:02.875 [Pipeline] node 00:00:02.881 Running on VM-host-SM38 in /var/jenkins/workspace/raid-vg-autotest 00:00:02.886 [Pipeline] { 00:00:02.897 [Pipeline] catchError 00:00:02.899 [Pipeline] { 00:00:02.909 [Pipeline] wrap 00:00:02.917 [Pipeline] { 00:00:02.926 [Pipeline] stage 00:00:02.927 [Pipeline] { (Prologue) 00:00:02.945 [Pipeline] echo 00:00:02.947 Node: VM-host-SM38 00:00:02.953 [Pipeline] cleanWs 00:00:02.962 [WS-CLEANUP] Deleting project workspace... 00:00:02.962 [WS-CLEANUP] Deferred wipeout is used... 00:00:02.967 [WS-CLEANUP] done 00:00:03.151 [Pipeline] setCustomBuildProperty 00:00:03.234 [Pipeline] httpRequest 00:00:03.546 [Pipeline] echo 00:00:03.547 Sorcerer 10.211.164.20 is alive 00:00:03.558 [Pipeline] retry 00:00:03.561 [Pipeline] { 00:00:03.575 [Pipeline] httpRequest 00:00:03.578 HttpMethod: GET 00:00:03.579 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:03.579 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:03.580 Response Code: HTTP/1.1 200 OK 00:00:03.581 Success: Status code 200 is in the accepted range: 200,404 00:00:03.581 Saving response body to /var/jenkins/workspace/raid-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:03.726 [Pipeline] } 00:00:03.744 [Pipeline] // retry 00:00:03.751 [Pipeline] sh 00:00:04.032 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:04.047 [Pipeline] httpRequest 00:00:04.391 [Pipeline] echo 00:00:04.393 Sorcerer 10.211.164.20 is alive 00:00:04.402 [Pipeline] retry 00:00:04.404 [Pipeline] { 00:00:04.418 [Pipeline] httpRequest 00:00:04.422 HttpMethod: GET 00:00:04.423 URL: 
http://10.211.164.20/packages/spdk_e43b3b914a2f081051aba39c73d952a3fadefbbe.tar.gz 00:00:04.423 Sending request to url: http://10.211.164.20/packages/spdk_e43b3b914a2f081051aba39c73d952a3fadefbbe.tar.gz 00:00:04.429 Response Code: HTTP/1.1 200 OK 00:00:04.430 Success: Status code 200 is in the accepted range: 200,404 00:00:04.430 Saving response body to /var/jenkins/workspace/raid-vg-autotest/spdk_e43b3b914a2f081051aba39c73d952a3fadefbbe.tar.gz 00:00:16.051 [Pipeline] } 00:00:16.068 [Pipeline] // retry 00:00:16.075 [Pipeline] sh 00:00:16.346 + tar --no-same-owner -xf spdk_e43b3b914a2f081051aba39c73d952a3fadefbbe.tar.gz 00:00:18.880 [Pipeline] sh 00:00:19.154 + git -C spdk log --oneline -n5 00:00:19.154 e43b3b914 bdev: Clean up duplicated asserts in bdev_io_pull_data() 00:00:19.154 752c08b51 bdev: Rename _bdev_memory_domain_io_get_buf() to bdev_io_get_bounce_buf() 00:00:19.154 22fe262e0 bdev: Relocate _bdev_memory_domain_io_get_buf_cb() close to _bdev_io_submit_ext() 00:00:19.154 3c6c4e019 bdev: Factor out checking bounce buffer necessity into helper function 00:00:19.154 0836dccda bdev: Add spdk_dif_ctx and spdk_dif_error into spdk_bdev_io 00:00:19.172 [Pipeline] writeFile 00:00:19.187 [Pipeline] sh 00:00:19.467 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:00:19.479 [Pipeline] sh 00:00:19.757 + cat autorun-spdk.conf 00:00:19.757 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:19.757 SPDK_RUN_ASAN=1 00:00:19.757 SPDK_RUN_UBSAN=1 00:00:19.757 SPDK_TEST_RAID=1 00:00:19.757 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:19.764 RUN_NIGHTLY=0 00:00:19.766 [Pipeline] } 00:00:19.781 [Pipeline] // stage 00:00:19.798 [Pipeline] stage 00:00:19.800 [Pipeline] { (Run VM) 00:00:19.813 [Pipeline] sh 00:00:20.091 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:00:20.091 + echo 'Start stage prepare_nvme.sh' 00:00:20.091 Start stage prepare_nvme.sh 00:00:20.091 + [[ -n 9 ]] 00:00:20.091 + disk_prefix=ex9 00:00:20.091 + [[ -n /var/jenkins/workspace/raid-vg-autotest ]] 
00:00:20.091 + [[ -e /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf ]] 00:00:20.091 + source /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf 00:00:20.091 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:20.091 ++ SPDK_RUN_ASAN=1 00:00:20.091 ++ SPDK_RUN_UBSAN=1 00:00:20.091 ++ SPDK_TEST_RAID=1 00:00:20.091 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:20.091 ++ RUN_NIGHTLY=0 00:00:20.091 + cd /var/jenkins/workspace/raid-vg-autotest 00:00:20.091 + nvme_files=() 00:00:20.091 + declare -A nvme_files 00:00:20.091 + backend_dir=/var/lib/libvirt/images/backends 00:00:20.091 + nvme_files['nvme.img']=5G 00:00:20.091 + nvme_files['nvme-cmb.img']=5G 00:00:20.091 + nvme_files['nvme-multi0.img']=4G 00:00:20.091 + nvme_files['nvme-multi1.img']=4G 00:00:20.091 + nvme_files['nvme-multi2.img']=4G 00:00:20.091 + nvme_files['nvme-openstack.img']=8G 00:00:20.091 + nvme_files['nvme-zns.img']=5G 00:00:20.091 + (( SPDK_TEST_NVME_PMR == 1 )) 00:00:20.091 + (( SPDK_TEST_FTL == 1 )) 00:00:20.091 + (( SPDK_TEST_NVME_FDP == 1 )) 00:00:20.091 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:00:20.091 + for nvme in "${!nvme_files[@]}" 00:00:20.091 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex9-nvme-multi2.img -s 4G 00:00:20.091 Formatting '/var/lib/libvirt/images/backends/ex9-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:00:20.091 + for nvme in "${!nvme_files[@]}" 00:00:20.091 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex9-nvme-cmb.img -s 5G 00:00:20.091 Formatting '/var/lib/libvirt/images/backends/ex9-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:00:20.091 + for nvme in "${!nvme_files[@]}" 00:00:20.091 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex9-nvme-openstack.img -s 8G 00:00:20.091 Formatting '/var/lib/libvirt/images/backends/ex9-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:00:20.091 + for nvme in "${!nvme_files[@]}" 00:00:20.091 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex9-nvme-zns.img -s 5G 00:00:20.091 Formatting '/var/lib/libvirt/images/backends/ex9-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:00:20.091 + for nvme in "${!nvme_files[@]}" 00:00:20.091 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex9-nvme-multi1.img -s 4G 00:00:20.091 Formatting '/var/lib/libvirt/images/backends/ex9-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:00:20.091 + for nvme in "${!nvme_files[@]}" 00:00:20.091 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex9-nvme-multi0.img -s 4G 00:00:20.350 Formatting '/var/lib/libvirt/images/backends/ex9-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:00:20.350 + for nvme in "${!nvme_files[@]}" 00:00:20.350 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex9-nvme.img -s 5G 00:00:20.350 
Formatting '/var/lib/libvirt/images/backends/ex9-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:00:20.350 ++ sudo grep -rl ex9-nvme.img /etc/libvirt/qemu 00:00:20.350 + echo 'End stage prepare_nvme.sh' 00:00:20.350 End stage prepare_nvme.sh 00:00:20.363 [Pipeline] sh 00:00:20.646 + DISTRO=fedora39 00:00:20.646 + CPUS=10 00:00:20.646 + RAM=12288 00:00:20.646 + jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:00:20.646 Setup: -n 10 -s 12288 -x -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex9-nvme.img -b /var/lib/libvirt/images/backends/ex9-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex9-nvme-multi1.img:/var/lib/libvirt/images/backends/ex9-nvme-multi2.img -H -a -v -f fedora39 00:00:20.646 00:00:20.646 DIR=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant 00:00:20.647 SPDK_DIR=/var/jenkins/workspace/raid-vg-autotest/spdk 00:00:20.647 VAGRANT_TARGET=/var/jenkins/workspace/raid-vg-autotest 00:00:20.647 HELP=0 00:00:20.647 DRY_RUN=0 00:00:20.647 NVME_FILE=/var/lib/libvirt/images/backends/ex9-nvme.img,/var/lib/libvirt/images/backends/ex9-nvme-multi0.img, 00:00:20.647 NVME_DISKS_TYPE=nvme,nvme, 00:00:20.647 NVME_AUTO_CREATE=0 00:00:20.647 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex9-nvme-multi1.img:/var/lib/libvirt/images/backends/ex9-nvme-multi2.img, 00:00:20.647 NVME_CMB=,, 00:00:20.647 NVME_PMR=,, 00:00:20.647 NVME_ZNS=,, 00:00:20.647 NVME_MS=,, 00:00:20.647 NVME_FDP=,, 00:00:20.647 SPDK_VAGRANT_DISTRO=fedora39 00:00:20.647 SPDK_VAGRANT_VMCPU=10 00:00:20.647 SPDK_VAGRANT_VMRAM=12288 00:00:20.647 SPDK_VAGRANT_PROVIDER=libvirt 00:00:20.647 SPDK_VAGRANT_HTTP_PROXY= 00:00:20.647 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:00:20.647 SPDK_OPENSTACK_NETWORK=0 00:00:20.647 VAGRANT_PACKAGE_BOX=0 00:00:20.647 VAGRANTFILE=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant/Vagrantfile 
00:00:20.647 FORCE_DISTRO=true 00:00:20.647 VAGRANT_BOX_VERSION= 00:00:20.647 EXTRA_VAGRANTFILES= 00:00:20.647 NIC_MODEL=e1000 00:00:20.647 00:00:20.647 mkdir: created directory '/var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt' 00:00:20.647 /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt /var/jenkins/workspace/raid-vg-autotest 00:00:23.175 Bringing machine 'default' up with 'libvirt' provider... 00:00:23.739 ==> default: Creating image (snapshot of base box volume). 00:00:24.349 ==> default: Creating domain with the following settings... 00:00:24.349 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1732650074_a9201907ba1745dfba06 00:00:24.349 ==> default: -- Domain type: kvm 00:00:24.349 ==> default: -- Cpus: 10 00:00:24.349 ==> default: -- Feature: acpi 00:00:24.349 ==> default: -- Feature: apic 00:00:24.349 ==> default: -- Feature: pae 00:00:24.349 ==> default: -- Memory: 12288M 00:00:24.349 ==> default: -- Memory Backing: hugepages: 00:00:24.349 ==> default: -- Management MAC: 00:00:24.349 ==> default: -- Loader: 00:00:24.349 ==> default: -- Nvram: 00:00:24.349 ==> default: -- Base box: spdk/fedora39 00:00:24.349 ==> default: -- Storage pool: default 00:00:24.349 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1732650074_a9201907ba1745dfba06.img (20G) 00:00:24.349 ==> default: -- Volume Cache: default 00:00:24.349 ==> default: -- Kernel: 00:00:24.349 ==> default: -- Initrd: 00:00:24.349 ==> default: -- Graphics Type: vnc 00:00:24.349 ==> default: -- Graphics Port: -1 00:00:24.349 ==> default: -- Graphics IP: 127.0.0.1 00:00:24.349 ==> default: -- Graphics Password: Not defined 00:00:24.349 ==> default: -- Video Type: cirrus 00:00:24.349 ==> default: -- Video VRAM: 9216 00:00:24.349 ==> default: -- Sound Type: 00:00:24.349 ==> default: -- Keymap: en-us 00:00:24.349 ==> default: -- TPM Path: 00:00:24.349 ==> default: -- INPUT: type=mouse, bus=ps2 00:00:24.349 ==> default: -- Command line args: 
00:00:24.349 ==> default: -> value=-device, 00:00:24.349 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:00:24.349 ==> default: -> value=-drive, 00:00:24.349 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex9-nvme.img,if=none,id=nvme-0-drive0, 00:00:24.349 ==> default: -> value=-device, 00:00:24.349 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:24.349 ==> default: -> value=-device, 00:00:24.349 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:00:24.349 ==> default: -> value=-drive, 00:00:24.349 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex9-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:00:24.349 ==> default: -> value=-device, 00:00:24.349 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:24.349 ==> default: -> value=-drive, 00:00:24.349 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex9-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:00:24.349 ==> default: -> value=-device, 00:00:24.349 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:24.349 ==> default: -> value=-drive, 00:00:24.349 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex9-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:00:24.349 ==> default: -> value=-device, 00:00:24.349 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:24.610 ==> default: Creating shared folders metadata... 00:00:24.610 ==> default: Starting domain. 00:00:25.983 ==> default: Waiting for domain to get an IP address... 00:00:47.946 ==> default: Waiting for SSH to become available... 00:00:47.946 ==> default: Configuring and enabling network interfaces... 
00:00:49.315 default: SSH address: 192.168.121.182:22 00:00:49.315 default: SSH username: vagrant 00:00:49.315 default: SSH auth method: private key 00:00:51.218 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:00:59.326 ==> default: Mounting SSHFS shared folder... 00:01:00.258 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:01:00.258 ==> default: Checking Mount.. 00:01:01.636 ==> default: Folder Successfully Mounted! 00:01:01.636 00:01:01.636 SUCCESS! 00:01:01.636 00:01:01.636 cd to /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:01:01.636 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:01:01.636 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 00:01:01.636 00:01:01.645 [Pipeline] } 00:01:01.662 [Pipeline] // stage 00:01:01.673 [Pipeline] dir 00:01:01.674 Running in /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt 00:01:01.676 [Pipeline] { 00:01:01.692 [Pipeline] catchError 00:01:01.694 [Pipeline] { 00:01:01.709 [Pipeline] sh 00:01:01.989 + vagrant ssh-config --host vagrant 00:01:01.989 + sed -ne '/^Host/,$p' 00:01:01.989 + tee ssh_conf 00:01:04.528 Host vagrant 00:01:04.528 HostName 192.168.121.182 00:01:04.528 User vagrant 00:01:04.528 Port 22 00:01:04.528 UserKnownHostsFile /dev/null 00:01:04.528 StrictHostKeyChecking no 00:01:04.528 PasswordAuthentication no 00:01:04.528 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:01:04.528 IdentitiesOnly yes 00:01:04.528 LogLevel FATAL 00:01:04.528 ForwardAgent yes 00:01:04.528 ForwardX11 yes 00:01:04.528 00:01:04.541 [Pipeline] withEnv 00:01:04.543 [Pipeline] { 00:01:04.555 [Pipeline] sh 00:01:04.828 + /usr/local/bin/ssh -t 
-F ssh_conf vagrant@vagrant '#!/bin/bash 00:01:04.828 source /etc/os-release 00:01:04.828 [[ -e /image.version ]] && img=$(< /image.version) 00:01:04.828 # Minimal, systemd-like check. 00:01:04.828 if [[ -e /.dockerenv ]]; then 00:01:04.828 # Clear garbage from the node'\''s name: 00:01:04.828 # agt-er_autotest_547-896 -> autotest_547-896 00:01:04.828 # $HOSTNAME is the actual container id 00:01:04.828 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:01:04.828 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:01:04.828 # We can assume this is a mount from a host where container is running, 00:01:04.828 # so fetch its hostname to easily identify the target swarm worker. 00:01:04.828 container="$(< /etc/hostname) ($agent)" 00:01:04.828 else 00:01:04.828 # Fallback 00:01:04.828 container=$agent 00:01:04.828 fi 00:01:04.828 fi 00:01:04.828 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:01:04.828 ' 00:01:05.095 [Pipeline] } 00:01:05.111 [Pipeline] // withEnv 00:01:05.119 [Pipeline] setCustomBuildProperty 00:01:05.133 [Pipeline] stage 00:01:05.135 [Pipeline] { (Tests) 00:01:05.153 [Pipeline] sh 00:01:05.433 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:01:05.704 [Pipeline] sh 00:01:05.982 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:01:06.255 [Pipeline] timeout 00:01:06.255 Timeout set to expire in 1 hr 30 min 00:01:06.257 [Pipeline] { 00:01:06.271 [Pipeline] sh 00:01:06.547 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'git -C spdk_repo/spdk reset --hard' 00:01:07.113 HEAD is now at e43b3b914 bdev: Clean up duplicated asserts in bdev_io_pull_data() 00:01:07.125 [Pipeline] sh 00:01:07.401 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'sudo chown vagrant:vagrant spdk_repo' 00:01:07.672 [Pipeline] sh 00:01:07.950 + scp -F ssh_conf -r 
/var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:01:08.223 [Pipeline] sh 00:01:08.499 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'JOB_BASE_NAME=raid-vg-autotest ./autoruner.sh spdk_repo' 00:01:08.756 ++ readlink -f spdk_repo 00:01:08.756 + DIR_ROOT=/home/vagrant/spdk_repo 00:01:08.756 + [[ -n /home/vagrant/spdk_repo ]] 00:01:08.756 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:01:08.756 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:01:08.756 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:01:08.756 + [[ ! -d /home/vagrant/spdk_repo/output ]] 00:01:08.756 + [[ -d /home/vagrant/spdk_repo/output ]] 00:01:08.756 + [[ raid-vg-autotest == pkgdep-* ]] 00:01:08.756 + cd /home/vagrant/spdk_repo 00:01:08.756 + source /etc/os-release 00:01:08.756 ++ NAME='Fedora Linux' 00:01:08.756 ++ VERSION='39 (Cloud Edition)' 00:01:08.756 ++ ID=fedora 00:01:08.756 ++ VERSION_ID=39 00:01:08.756 ++ VERSION_CODENAME= 00:01:08.756 ++ PLATFORM_ID=platform:f39 00:01:08.756 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:01:08.756 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:08.756 ++ LOGO=fedora-logo-icon 00:01:08.756 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:01:08.756 ++ HOME_URL=https://fedoraproject.org/ 00:01:08.756 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:01:08.756 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:08.756 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:08.756 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:08.756 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:01:08.756 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:08.756 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:01:08.756 ++ SUPPORT_END=2024-11-12 00:01:08.756 ++ VARIANT='Cloud Edition' 00:01:08.756 ++ VARIANT_ID=cloud 00:01:08.756 + uname -a 00:01:08.756 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:01:08.756 + sudo 
/home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:01:09.014 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:01:09.014 Hugepages 00:01:09.014 node hugesize free / total 00:01:09.014 node0 1048576kB 0 / 0 00:01:09.014 node0 2048kB 0 / 0 00:01:09.014 00:01:09.014 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:09.014 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:01:09.014 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:01:09.014 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:01:09.014 + rm -f /tmp/spdk-ld-path 00:01:09.014 + source autorun-spdk.conf 00:01:09.014 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:09.014 ++ SPDK_RUN_ASAN=1 00:01:09.015 ++ SPDK_RUN_UBSAN=1 00:01:09.015 ++ SPDK_TEST_RAID=1 00:01:09.015 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:09.015 ++ RUN_NIGHTLY=0 00:01:09.015 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:09.015 + [[ -n '' ]] 00:01:09.015 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:01:09.303 + for M in /var/spdk/build-*-manifest.txt 00:01:09.303 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:01:09.303 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:09.303 + for M in /var/spdk/build-*-manifest.txt 00:01:09.303 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:09.303 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:09.303 + for M in /var/spdk/build-*-manifest.txt 00:01:09.303 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:09.303 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:09.303 ++ uname 00:01:09.303 + [[ Linux == \L\i\n\u\x ]] 00:01:09.303 + sudo dmesg -T 00:01:09.303 + sudo dmesg --clear 00:01:09.303 + dmesg_pid=4985 00:01:09.303 + [[ Fedora Linux == FreeBSD ]] 00:01:09.303 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:09.303 + 
UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:09.303 + sudo dmesg -Tw 00:01:09.303 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:09.303 + [[ -x /usr/src/fio-static/fio ]] 00:01:09.303 + export FIO_BIN=/usr/src/fio-static/fio 00:01:09.303 + FIO_BIN=/usr/src/fio-static/fio 00:01:09.303 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:09.303 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:09.303 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:09.303 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:09.303 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:09.303 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:09.303 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:09.303 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:09.303 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:09.303 19:42:00 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:01:09.303 19:42:00 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:09.303 19:42:00 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:09.303 19:42:00 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_RUN_ASAN=1 00:01:09.303 19:42:00 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_RUN_UBSAN=1 00:01:09.303 19:42:00 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_RAID=1 00:01:09.303 19:42:00 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:09.303 19:42:00 -- spdk_repo/autorun-spdk.conf@6 -- $ RUN_NIGHTLY=0 00:01:09.303 19:42:00 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:01:09.303 19:42:00 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:09.303 19:42:00 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:01:09.303 19:42:00 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:01:09.303 19:42:00 -- 
scripts/common.sh@15 -- $ shopt -s extglob 00:01:09.303 19:42:00 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:09.303 19:42:00 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:09.303 19:42:00 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:09.303 19:42:00 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:09.303 19:42:00 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:09.303 19:42:00 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:09.303 19:42:00 -- paths/export.sh@5 -- $ export PATH 00:01:09.303 19:42:00 -- paths/export.sh@6 -- $ echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:09.303 19:42:00 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:01:09.303 19:42:00 -- common/autobuild_common.sh@493 -- $ date +%s 00:01:09.303 19:42:00 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1732650120.XXXXXX 00:01:09.303 19:42:00 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1732650120.jmg3Xr 00:01:09.303 19:42:00 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:01:09.303 19:42:00 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']' 00:01:09.303 19:42:00 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:01:09.303 19:42:00 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:01:09.303 19:42:00 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:01:09.303 19:42:00 -- common/autobuild_common.sh@509 -- $ get_config_params 00:01:09.303 19:42:00 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:01:09.303 19:42:00 -- common/autotest_common.sh@10 -- $ set +x 00:01:09.303 19:42:00 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f' 
00:01:09.303 19:42:00 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:01:09.303 19:42:00 -- pm/common@17 -- $ local monitor 00:01:09.303 19:42:00 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:09.303 19:42:00 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:09.303 19:42:00 -- pm/common@25 -- $ sleep 1 00:01:09.303 19:42:00 -- pm/common@21 -- $ date +%s 00:01:09.303 19:42:00 -- pm/common@21 -- $ date +%s 00:01:09.303 19:42:00 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732650120 00:01:09.303 19:42:00 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732650120 00:01:09.570 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732650120_collect-cpu-load.pm.log 00:01:09.570 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732650120_collect-vmstat.pm.log 00:01:10.502 19:42:01 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:01:10.502 19:42:01 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:10.502 19:42:01 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:10.502 19:42:01 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:01:10.502 19:42:01 -- spdk/autobuild.sh@16 -- $ date -u 00:01:10.502 Tue Nov 26 07:42:01 PM UTC 2024 00:01:10.502 19:42:01 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:10.502 v25.01-pre-249-ge43b3b914 00:01:10.502 19:42:01 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:01:10.502 19:42:01 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:01:10.502 19:42:01 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:01:10.502 19:42:01 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:01:10.502 19:42:01 -- common/autotest_common.sh@10 -- $ set +x 
00:01:10.502 ************************************ 00:01:10.502 START TEST asan 00:01:10.502 ************************************ 00:01:10.502 using asan 00:01:10.502 19:42:01 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan' 00:01:10.502 00:01:10.502 real 0m0.000s 00:01:10.502 user 0m0.000s 00:01:10.502 sys 0m0.000s 00:01:10.503 ************************************ 00:01:10.503 END TEST asan 00:01:10.503 ************************************ 00:01:10.503 19:42:01 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:01:10.503 19:42:01 asan -- common/autotest_common.sh@10 -- $ set +x 00:01:10.503 19:42:01 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:10.503 19:42:01 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:10.503 19:42:01 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:01:10.503 19:42:01 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:01:10.503 19:42:01 -- common/autotest_common.sh@10 -- $ set +x 00:01:10.503 ************************************ 00:01:10.503 START TEST ubsan 00:01:10.503 ************************************ 00:01:10.503 using ubsan 00:01:10.503 ************************************ 00:01:10.503 END TEST ubsan 00:01:10.503 ************************************ 00:01:10.503 19:42:01 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:01:10.503 00:01:10.503 real 0m0.000s 00:01:10.503 user 0m0.000s 00:01:10.503 sys 0m0.000s 00:01:10.503 19:42:01 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:01:10.503 19:42:01 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:10.503 19:42:01 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:10.503 19:42:01 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:10.503 19:42:01 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:10.503 19:42:01 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:10.503 19:42:01 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:10.503 19:42:01 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 
]] 00:01:10.503 19:42:01 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:10.503 19:42:01 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:10.503 19:42:01 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-shared 00:01:10.503 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:01:10.503 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:01:11.069 Using 'verbs' RDMA provider 00:01:23.910 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:01:33.884 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:01:33.884 Creating mk/config.mk...done. 00:01:33.884 Creating mk/cc.flags.mk...done. 00:01:33.884 Type 'make' to build. 00:01:33.884 19:42:23 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:01:33.884 19:42:23 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:01:33.884 19:42:23 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:01:33.884 19:42:23 -- common/autotest_common.sh@10 -- $ set +x 00:01:33.884 ************************************ 00:01:33.884 START TEST make 00:01:33.884 ************************************ 00:01:33.884 19:42:23 make -- common/autotest_common.sh@1129 -- $ make -j10 00:01:33.884 make[1]: Nothing to be done for 'all'. 
00:01:43.848 The Meson build system 00:01:43.848 Version: 1.5.0 00:01:43.848 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:01:43.848 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:01:43.848 Build type: native build 00:01:43.848 Program cat found: YES (/usr/bin/cat) 00:01:43.848 Project name: DPDK 00:01:43.848 Project version: 24.03.0 00:01:43.848 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:01:43.848 C linker for the host machine: cc ld.bfd 2.40-14 00:01:43.848 Host machine cpu family: x86_64 00:01:43.848 Host machine cpu: x86_64 00:01:43.848 Message: ## Building in Developer Mode ## 00:01:43.848 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:43.848 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:01:43.848 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:43.848 Program python3 found: YES (/usr/bin/python3) 00:01:43.848 Program cat found: YES (/usr/bin/cat) 00:01:43.848 Compiler for C supports arguments -march=native: YES 00:01:43.848 Checking for size of "void *" : 8 00:01:43.848 Checking for size of "void *" : 8 (cached) 00:01:43.848 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:01:43.848 Library m found: YES 00:01:43.848 Library numa found: YES 00:01:43.848 Has header "numaif.h" : YES 00:01:43.848 Library fdt found: NO 00:01:43.848 Library execinfo found: NO 00:01:43.848 Has header "execinfo.h" : YES 00:01:43.848 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:01:43.848 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:43.848 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:43.848 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:43.848 Run-time dependency openssl found: YES 3.1.1 00:01:43.848 Run-time dependency libpcap found: YES 1.10.4 00:01:43.848 Has header "pcap.h" with dependency 
libpcap: YES 00:01:43.848 Compiler for C supports arguments -Wcast-qual: YES 00:01:43.848 Compiler for C supports arguments -Wdeprecated: YES 00:01:43.848 Compiler for C supports arguments -Wformat: YES 00:01:43.848 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:43.848 Compiler for C supports arguments -Wformat-security: NO 00:01:43.848 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:43.848 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:43.848 Compiler for C supports arguments -Wnested-externs: YES 00:01:43.848 Compiler for C supports arguments -Wold-style-definition: YES 00:01:43.848 Compiler for C supports arguments -Wpointer-arith: YES 00:01:43.848 Compiler for C supports arguments -Wsign-compare: YES 00:01:43.848 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:43.848 Compiler for C supports arguments -Wundef: YES 00:01:43.848 Compiler for C supports arguments -Wwrite-strings: YES 00:01:43.848 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:43.848 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:43.848 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:43.848 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:43.848 Program objdump found: YES (/usr/bin/objdump) 00:01:43.848 Compiler for C supports arguments -mavx512f: YES 00:01:43.848 Checking if "AVX512 checking" compiles: YES 00:01:43.848 Fetching value of define "__SSE4_2__" : 1 00:01:43.848 Fetching value of define "__AES__" : 1 00:01:43.848 Fetching value of define "__AVX__" : 1 00:01:43.848 Fetching value of define "__AVX2__" : 1 00:01:43.848 Fetching value of define "__AVX512BW__" : 1 00:01:43.848 Fetching value of define "__AVX512CD__" : 1 00:01:43.848 Fetching value of define "__AVX512DQ__" : 1 00:01:43.848 Fetching value of define "__AVX512F__" : 1 00:01:43.848 Fetching value of define "__AVX512VL__" : 1 00:01:43.848 Fetching value of define 
"__PCLMUL__" : 1 00:01:43.848 Fetching value of define "__RDRND__" : 1 00:01:43.848 Fetching value of define "__RDSEED__" : 1 00:01:43.848 Fetching value of define "__VPCLMULQDQ__" : 1 00:01:43.848 Fetching value of define "__znver1__" : (undefined) 00:01:43.848 Fetching value of define "__znver2__" : (undefined) 00:01:43.848 Fetching value of define "__znver3__" : (undefined) 00:01:43.848 Fetching value of define "__znver4__" : (undefined) 00:01:43.848 Library asan found: YES 00:01:43.848 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:43.848 Message: lib/log: Defining dependency "log" 00:01:43.848 Message: lib/kvargs: Defining dependency "kvargs" 00:01:43.848 Message: lib/telemetry: Defining dependency "telemetry" 00:01:43.848 Library rt found: YES 00:01:43.848 Checking for function "getentropy" : NO 00:01:43.848 Message: lib/eal: Defining dependency "eal" 00:01:43.848 Message: lib/ring: Defining dependency "ring" 00:01:43.848 Message: lib/rcu: Defining dependency "rcu" 00:01:43.848 Message: lib/mempool: Defining dependency "mempool" 00:01:43.848 Message: lib/mbuf: Defining dependency "mbuf" 00:01:43.848 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:43.848 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:43.848 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:43.848 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:43.848 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:43.848 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:01:43.848 Compiler for C supports arguments -mpclmul: YES 00:01:43.848 Compiler for C supports arguments -maes: YES 00:01:43.848 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:43.848 Compiler for C supports arguments -mavx512bw: YES 00:01:43.848 Compiler for C supports arguments -mavx512dq: YES 00:01:43.848 Compiler for C supports arguments -mavx512vl: YES 00:01:43.848 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:43.848 Compiler for C 
supports arguments -mavx2: YES 00:01:43.848 Compiler for C supports arguments -mavx: YES 00:01:43.848 Message: lib/net: Defining dependency "net" 00:01:43.848 Message: lib/meter: Defining dependency "meter" 00:01:43.848 Message: lib/ethdev: Defining dependency "ethdev" 00:01:43.848 Message: lib/pci: Defining dependency "pci" 00:01:43.848 Message: lib/cmdline: Defining dependency "cmdline" 00:01:43.848 Message: lib/hash: Defining dependency "hash" 00:01:43.848 Message: lib/timer: Defining dependency "timer" 00:01:43.848 Message: lib/compressdev: Defining dependency "compressdev" 00:01:43.848 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:43.848 Message: lib/dmadev: Defining dependency "dmadev" 00:01:43.848 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:43.848 Message: lib/power: Defining dependency "power" 00:01:43.848 Message: lib/reorder: Defining dependency "reorder" 00:01:43.848 Message: lib/security: Defining dependency "security" 00:01:43.848 Has header "linux/userfaultfd.h" : YES 00:01:43.848 Has header "linux/vduse.h" : YES 00:01:43.848 Message: lib/vhost: Defining dependency "vhost" 00:01:43.848 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:43.848 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:43.848 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:43.848 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:43.848 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:43.848 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:43.848 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:43.848 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:43.848 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:43.848 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:43.848 Program doxygen found: YES 
(/usr/local/bin/doxygen) 00:01:43.848 Configuring doxy-api-html.conf using configuration 00:01:43.848 Configuring doxy-api-man.conf using configuration 00:01:43.848 Program mandb found: YES (/usr/bin/mandb) 00:01:43.848 Program sphinx-build found: NO 00:01:43.848 Configuring rte_build_config.h using configuration 00:01:43.848 Message: 00:01:43.848 ================= 00:01:43.848 Applications Enabled 00:01:43.848 ================= 00:01:43.848 00:01:43.848 apps: 00:01:43.848 00:01:43.848 00:01:43.848 Message: 00:01:43.848 ================= 00:01:43.848 Libraries Enabled 00:01:43.848 ================= 00:01:43.848 00:01:43.848 libs: 00:01:43.849 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:43.849 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:43.849 cryptodev, dmadev, power, reorder, security, vhost, 00:01:43.849 00:01:43.849 Message: 00:01:43.849 =============== 00:01:43.849 Drivers Enabled 00:01:43.849 =============== 00:01:43.849 00:01:43.849 common: 00:01:43.849 00:01:43.849 bus: 00:01:43.849 pci, vdev, 00:01:43.849 mempool: 00:01:43.849 ring, 00:01:43.849 dma: 00:01:43.849 00:01:43.849 net: 00:01:43.849 00:01:43.849 crypto: 00:01:43.849 00:01:43.849 compress: 00:01:43.849 00:01:43.849 vdpa: 00:01:43.849 00:01:43.849 00:01:43.849 Message: 00:01:43.849 ================= 00:01:43.849 Content Skipped 00:01:43.849 ================= 00:01:43.849 00:01:43.849 apps: 00:01:43.849 dumpcap: explicitly disabled via build config 00:01:43.849 graph: explicitly disabled via build config 00:01:43.849 pdump: explicitly disabled via build config 00:01:43.849 proc-info: explicitly disabled via build config 00:01:43.849 test-acl: explicitly disabled via build config 00:01:43.849 test-bbdev: explicitly disabled via build config 00:01:43.849 test-cmdline: explicitly disabled via build config 00:01:43.849 test-compress-perf: explicitly disabled via build config 00:01:43.849 test-crypto-perf: explicitly disabled via build config 00:01:43.849 
test-dma-perf: explicitly disabled via build config 00:01:43.849 test-eventdev: explicitly disabled via build config 00:01:43.849 test-fib: explicitly disabled via build config 00:01:43.849 test-flow-perf: explicitly disabled via build config 00:01:43.849 test-gpudev: explicitly disabled via build config 00:01:43.849 test-mldev: explicitly disabled via build config 00:01:43.849 test-pipeline: explicitly disabled via build config 00:01:43.849 test-pmd: explicitly disabled via build config 00:01:43.849 test-regex: explicitly disabled via build config 00:01:43.849 test-sad: explicitly disabled via build config 00:01:43.849 test-security-perf: explicitly disabled via build config 00:01:43.849 00:01:43.849 libs: 00:01:43.849 argparse: explicitly disabled via build config 00:01:43.849 metrics: explicitly disabled via build config 00:01:43.849 acl: explicitly disabled via build config 00:01:43.849 bbdev: explicitly disabled via build config 00:01:43.849 bitratestats: explicitly disabled via build config 00:01:43.849 bpf: explicitly disabled via build config 00:01:43.849 cfgfile: explicitly disabled via build config 00:01:43.849 distributor: explicitly disabled via build config 00:01:43.849 efd: explicitly disabled via build config 00:01:43.849 eventdev: explicitly disabled via build config 00:01:43.849 dispatcher: explicitly disabled via build config 00:01:43.849 gpudev: explicitly disabled via build config 00:01:43.849 gro: explicitly disabled via build config 00:01:43.849 gso: explicitly disabled via build config 00:01:43.849 ip_frag: explicitly disabled via build config 00:01:43.849 jobstats: explicitly disabled via build config 00:01:43.849 latencystats: explicitly disabled via build config 00:01:43.849 lpm: explicitly disabled via build config 00:01:43.849 member: explicitly disabled via build config 00:01:43.849 pcapng: explicitly disabled via build config 00:01:43.849 rawdev: explicitly disabled via build config 00:01:43.849 regexdev: explicitly disabled via build 
config 00:01:43.849 mldev: explicitly disabled via build config 00:01:43.849 rib: explicitly disabled via build config 00:01:43.849 sched: explicitly disabled via build config 00:01:43.849 stack: explicitly disabled via build config 00:01:43.849 ipsec: explicitly disabled via build config 00:01:43.849 pdcp: explicitly disabled via build config 00:01:43.849 fib: explicitly disabled via build config 00:01:43.849 port: explicitly disabled via build config 00:01:43.849 pdump: explicitly disabled via build config 00:01:43.849 table: explicitly disabled via build config 00:01:43.849 pipeline: explicitly disabled via build config 00:01:43.849 graph: explicitly disabled via build config 00:01:43.849 node: explicitly disabled via build config 00:01:43.849 00:01:43.849 drivers: 00:01:43.849 common/cpt: not in enabled drivers build config 00:01:43.849 common/dpaax: not in enabled drivers build config 00:01:43.849 common/iavf: not in enabled drivers build config 00:01:43.849 common/idpf: not in enabled drivers build config 00:01:43.849 common/ionic: not in enabled drivers build config 00:01:43.849 common/mvep: not in enabled drivers build config 00:01:43.849 common/octeontx: not in enabled drivers build config 00:01:43.849 bus/auxiliary: not in enabled drivers build config 00:01:43.849 bus/cdx: not in enabled drivers build config 00:01:43.849 bus/dpaa: not in enabled drivers build config 00:01:43.849 bus/fslmc: not in enabled drivers build config 00:01:43.849 bus/ifpga: not in enabled drivers build config 00:01:43.849 bus/platform: not in enabled drivers build config 00:01:43.849 bus/uacce: not in enabled drivers build config 00:01:43.849 bus/vmbus: not in enabled drivers build config 00:01:43.849 common/cnxk: not in enabled drivers build config 00:01:43.849 common/mlx5: not in enabled drivers build config 00:01:43.849 common/nfp: not in enabled drivers build config 00:01:43.849 common/nitrox: not in enabled drivers build config 00:01:43.849 common/qat: not in enabled drivers 
build config 00:01:43.849 common/sfc_efx: not in enabled drivers build config 00:01:43.849 mempool/bucket: not in enabled drivers build config 00:01:43.849 mempool/cnxk: not in enabled drivers build config 00:01:43.849 mempool/dpaa: not in enabled drivers build config 00:01:43.849 mempool/dpaa2: not in enabled drivers build config 00:01:43.849 mempool/octeontx: not in enabled drivers build config 00:01:43.849 mempool/stack: not in enabled drivers build config 00:01:43.849 dma/cnxk: not in enabled drivers build config 00:01:43.849 dma/dpaa: not in enabled drivers build config 00:01:43.849 dma/dpaa2: not in enabled drivers build config 00:01:43.849 dma/hisilicon: not in enabled drivers build config 00:01:43.849 dma/idxd: not in enabled drivers build config 00:01:43.849 dma/ioat: not in enabled drivers build config 00:01:43.849 dma/skeleton: not in enabled drivers build config 00:01:43.849 net/af_packet: not in enabled drivers build config 00:01:43.849 net/af_xdp: not in enabled drivers build config 00:01:43.849 net/ark: not in enabled drivers build config 00:01:43.849 net/atlantic: not in enabled drivers build config 00:01:43.849 net/avp: not in enabled drivers build config 00:01:43.849 net/axgbe: not in enabled drivers build config 00:01:43.849 net/bnx2x: not in enabled drivers build config 00:01:43.849 net/bnxt: not in enabled drivers build config 00:01:43.849 net/bonding: not in enabled drivers build config 00:01:43.849 net/cnxk: not in enabled drivers build config 00:01:43.849 net/cpfl: not in enabled drivers build config 00:01:43.849 net/cxgbe: not in enabled drivers build config 00:01:43.849 net/dpaa: not in enabled drivers build config 00:01:43.849 net/dpaa2: not in enabled drivers build config 00:01:43.849 net/e1000: not in enabled drivers build config 00:01:43.849 net/ena: not in enabled drivers build config 00:01:43.849 net/enetc: not in enabled drivers build config 00:01:43.849 net/enetfec: not in enabled drivers build config 00:01:43.849 net/enic: not in 
enabled drivers build config 00:01:43.849 net/failsafe: not in enabled drivers build config 00:01:43.849 net/fm10k: not in enabled drivers build config 00:01:43.849 net/gve: not in enabled drivers build config 00:01:43.849 net/hinic: not in enabled drivers build config 00:01:43.849 net/hns3: not in enabled drivers build config 00:01:43.849 net/i40e: not in enabled drivers build config 00:01:43.849 net/iavf: not in enabled drivers build config 00:01:43.849 net/ice: not in enabled drivers build config 00:01:43.849 net/idpf: not in enabled drivers build config 00:01:43.849 net/igc: not in enabled drivers build config 00:01:43.849 net/ionic: not in enabled drivers build config 00:01:43.849 net/ipn3ke: not in enabled drivers build config 00:01:43.849 net/ixgbe: not in enabled drivers build config 00:01:43.849 net/mana: not in enabled drivers build config 00:01:43.849 net/memif: not in enabled drivers build config 00:01:43.849 net/mlx4: not in enabled drivers build config 00:01:43.849 net/mlx5: not in enabled drivers build config 00:01:43.849 net/mvneta: not in enabled drivers build config 00:01:43.849 net/mvpp2: not in enabled drivers build config 00:01:43.849 net/netvsc: not in enabled drivers build config 00:01:43.849 net/nfb: not in enabled drivers build config 00:01:43.849 net/nfp: not in enabled drivers build config 00:01:43.849 net/ngbe: not in enabled drivers build config 00:01:43.849 net/null: not in enabled drivers build config 00:01:43.849 net/octeontx: not in enabled drivers build config 00:01:43.849 net/octeon_ep: not in enabled drivers build config 00:01:43.849 net/pcap: not in enabled drivers build config 00:01:43.849 net/pfe: not in enabled drivers build config 00:01:43.849 net/qede: not in enabled drivers build config 00:01:43.849 net/ring: not in enabled drivers build config 00:01:43.849 net/sfc: not in enabled drivers build config 00:01:43.849 net/softnic: not in enabled drivers build config 00:01:43.849 net/tap: not in enabled drivers build config 
00:01:43.849 net/thunderx: not in enabled drivers build config 00:01:43.849 net/txgbe: not in enabled drivers build config 00:01:43.849 net/vdev_netvsc: not in enabled drivers build config 00:01:43.849 net/vhost: not in enabled drivers build config 00:01:43.849 net/virtio: not in enabled drivers build config 00:01:43.849 net/vmxnet3: not in enabled drivers build config 00:01:43.849 raw/*: missing internal dependency, "rawdev" 00:01:43.849 crypto/armv8: not in enabled drivers build config 00:01:43.849 crypto/bcmfs: not in enabled drivers build config 00:01:43.849 crypto/caam_jr: not in enabled drivers build config 00:01:43.849 crypto/ccp: not in enabled drivers build config 00:01:43.849 crypto/cnxk: not in enabled drivers build config 00:01:43.849 crypto/dpaa_sec: not in enabled drivers build config 00:01:43.849 crypto/dpaa2_sec: not in enabled drivers build config 00:01:43.849 crypto/ipsec_mb: not in enabled drivers build config 00:01:43.849 crypto/mlx5: not in enabled drivers build config 00:01:43.849 crypto/mvsam: not in enabled drivers build config 00:01:43.849 crypto/nitrox: not in enabled drivers build config 00:01:43.849 crypto/null: not in enabled drivers build config 00:01:43.850 crypto/octeontx: not in enabled drivers build config 00:01:43.850 crypto/openssl: not in enabled drivers build config 00:01:43.850 crypto/scheduler: not in enabled drivers build config 00:01:43.850 crypto/uadk: not in enabled drivers build config 00:01:43.850 crypto/virtio: not in enabled drivers build config 00:01:43.850 compress/isal: not in enabled drivers build config 00:01:43.850 compress/mlx5: not in enabled drivers build config 00:01:43.850 compress/nitrox: not in enabled drivers build config 00:01:43.850 compress/octeontx: not in enabled drivers build config 00:01:43.850 compress/zlib: not in enabled drivers build config 00:01:43.850 regex/*: missing internal dependency, "regexdev" 00:01:43.850 ml/*: missing internal dependency, "mldev" 00:01:43.850 vdpa/ifc: not in enabled 
drivers build config 00:01:43.850 vdpa/mlx5: not in enabled drivers build config 00:01:43.850 vdpa/nfp: not in enabled drivers build config 00:01:43.850 vdpa/sfc: not in enabled drivers build config 00:01:43.850 event/*: missing internal dependency, "eventdev" 00:01:43.850 baseband/*: missing internal dependency, "bbdev" 00:01:43.850 gpu/*: missing internal dependency, "gpudev" 00:01:43.850 00:01:43.850 00:01:43.850 Build targets in project: 84 00:01:43.850 00:01:43.850 DPDK 24.03.0 00:01:43.850 00:01:43.850 User defined options 00:01:43.850 buildtype : debug 00:01:43.850 default_library : shared 00:01:43.850 libdir : lib 00:01:43.850 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:01:43.850 b_sanitize : address 00:01:43.850 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:01:43.850 c_link_args : 00:01:43.850 cpu_instruction_set: native 00:01:43.850 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:01:43.850 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:01:43.850 enable_docs : false 00:01:43.850 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:01:43.850 enable_kmods : false 00:01:43.850 max_lcores : 128 00:01:43.850 tests : false 00:01:43.850 00:01:43.850 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:43.850 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:01:43.850 [1/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:43.850 [2/267] 
Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:43.850 [3/267] Linking static target lib/librte_kvargs.a 00:01:43.850 [4/267] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:43.850 [5/267] Linking static target lib/librte_log.a 00:01:43.850 [6/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:44.155 [7/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:44.155 [8/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:44.155 [9/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:44.155 [10/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:44.155 [11/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:44.155 [12/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:44.155 [13/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:44.155 [14/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.415 [15/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:44.415 [16/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:44.415 [17/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:44.415 [18/267] Linking static target lib/librte_telemetry.a 00:01:44.673 [19/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:44.673 [20/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:44.673 [21/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.673 [22/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:44.673 [23/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:44.673 [24/267] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:44.673 [25/267] Linking target lib/librte_log.so.24.1 00:01:44.673 [26/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:44.673 [27/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:44.931 [28/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:44.931 [29/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:44.931 [30/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:01:44.931 [31/267] Linking target lib/librte_kvargs.so.24.1 00:01:45.189 [32/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:45.189 [33/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:01:45.189 [34/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.189 [35/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:45.189 [36/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:45.189 [37/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:45.189 [38/267] Linking target lib/librte_telemetry.so.24.1 00:01:45.189 [39/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:45.189 [40/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:45.189 [41/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:45.449 [42/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:45.449 [43/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:45.449 [44/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:45.449 [45/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:01:45.449 [46/267] Compiling C 
object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:45.707 [47/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:45.707 [48/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:45.707 [49/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:45.707 [50/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:45.967 [51/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:45.967 [52/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:45.967 [53/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:45.967 [54/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:46.227 [55/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:46.227 [56/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:46.227 [57/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:46.227 [58/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:46.227 [59/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:46.227 [60/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:46.227 [61/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:46.227 [62/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:46.485 [63/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:46.485 [64/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:46.485 [65/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:46.485 [66/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:46.743 [67/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:46.743 [68/267] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 
00:01:46.743 [69/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:46.743 [70/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:47.000 [71/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:47.000 [72/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:47.000 [73/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:47.000 [74/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:47.000 [75/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:47.000 [76/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:47.299 [77/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:47.299 [78/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:47.299 [79/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:47.299 [80/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:47.299 [81/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:47.299 [82/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:47.299 [83/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:47.560 [84/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:47.560 [85/267] Linking static target lib/librte_ring.a 00:01:47.560 [86/267] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:47.560 [87/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:47.560 [88/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:47.817 [89/267] Linking static target lib/librte_eal.a 00:01:47.817 [90/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:47.817 [91/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:47.817 [92/267] Compiling C object 
lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:48.075 [93/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:48.075 [94/267] Linking static target lib/librte_mempool.a 00:01:48.075 [95/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.075 [96/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:48.075 [97/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:48.075 [98/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:48.075 [99/267] Linking static target lib/librte_rcu.a 00:01:48.075 [100/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:48.333 [101/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:48.333 [102/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:48.333 [103/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:01:48.590 [104/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.590 [105/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:48.590 [106/267] Linking static target lib/librte_net.a 00:01:48.590 [107/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:48.590 [108/267] Linking static target lib/librte_meter.a 00:01:48.590 [109/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:48.590 [110/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:48.848 [111/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:48.848 [112/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:48.848 [113/267] Linking static target lib/librte_mbuf.a 00:01:48.848 [114/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:48.848 [115/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.848 [116/267] Generating lib/net.sym_chk with a 
custom command (wrapped by meson to capture output) 00:01:49.175 [117/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:49.175 [118/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:49.175 [119/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.175 [120/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:49.455 [121/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:49.455 [122/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:49.711 [123/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:49.711 [124/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:49.711 [125/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:49.711 [126/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:49.711 [127/267] Linking static target lib/librte_pci.a 00:01:49.711 [128/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.711 [129/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:49.711 [130/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:49.712 [131/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:49.712 [132/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:49.968 [133/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:49.968 [134/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:49.968 [135/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:01:49.968 [136/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.968 [137/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:49.968 [138/267] 
Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:49.968 [139/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:49.968 [140/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:49.968 [141/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:49.968 [142/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:49.968 [143/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:50.225 [144/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:01:50.225 [145/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:50.225 [146/267] Linking static target lib/librte_cmdline.a 00:01:50.225 [147/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:01:50.482 [148/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:50.482 [149/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:50.482 [150/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:50.482 [151/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:50.738 [152/267] Linking static target lib/librte_timer.a 00:01:50.738 [153/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:50.738 [154/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:50.738 [155/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:50.738 [156/267] Linking static target lib/librte_compressdev.a 00:01:50.995 [157/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:50.995 [158/267] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:50.995 [159/267] Linking static target lib/librte_hash.a 00:01:50.995 [160/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:50.995 
[161/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:50.995 [162/267] Linking static target lib/librte_ethdev.a 00:01:50.995 [163/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:50.995 [164/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:51.251 [165/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.251 [166/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:51.251 [167/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:51.251 [168/267] Linking static target lib/librte_dmadev.a 00:01:51.508 [169/267] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:51.508 [170/267] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:51.508 [171/267] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:51.508 [172/267] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:51.508 [173/267] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.764 [174/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.764 [175/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:51.764 [176/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:52.021 [177/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:52.021 [178/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:52.021 [179/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.021 [180/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.021 [181/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:52.021 [182/267] Compiling C object 
lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:52.021 [183/267] Linking static target lib/librte_cryptodev.a 00:01:52.021 [184/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:52.021 [185/267] Linking static target lib/librte_power.a 00:01:52.278 [186/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:52.534 [187/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:52.534 [188/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:52.534 [189/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:52.534 [190/267] Linking static target lib/librte_reorder.a 00:01:52.791 [191/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:52.791 [192/267] Linking static target lib/librte_security.a 00:01:53.048 [193/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:53.048 [194/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.048 [195/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.305 [196/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.305 [197/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:53.305 [198/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:53.562 [199/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:53.562 [200/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:53.562 [201/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:53.819 [202/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:53.819 [203/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:53.819 [204/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:53.819 
[205/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:53.819 [206/267] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:53.819 [207/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:54.126 [208/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:54.126 [209/267] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:54.126 [210/267] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:54.126 [211/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:54.126 [212/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:54.126 [213/267] Linking static target drivers/librte_bus_vdev.a 00:01:54.126 [214/267] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.422 [215/267] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:54.422 [216/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:54.422 [217/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:54.422 [218/267] Linking static target drivers/librte_bus_pci.a 00:01:54.422 [219/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:54.422 [220/267] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:54.422 [221/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.680 [222/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:54.680 [223/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:54.680 [224/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:54.680 [225/267] Linking static target 
drivers/librte_mempool_ring.a 00:01:54.680 [226/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.937 [227/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:56.306 [228/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.306 [229/267] Linking target lib/librte_eal.so.24.1 00:01:56.564 [230/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:01:56.564 [231/267] Linking target lib/librte_meter.so.24.1 00:01:56.564 [232/267] Linking target lib/librte_ring.so.24.1 00:01:56.564 [233/267] Linking target lib/librte_timer.so.24.1 00:01:56.564 [234/267] Linking target lib/librte_pci.so.24.1 00:01:56.564 [235/267] Linking target drivers/librte_bus_vdev.so.24.1 00:01:56.564 [236/267] Linking target lib/librte_dmadev.so.24.1 00:01:56.564 [237/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:01:56.564 [238/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:01:56.564 [239/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:01:56.821 [240/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:01:56.821 [241/267] Linking target drivers/librte_bus_pci.so.24.1 00:01:56.821 [242/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:01:56.821 [243/267] Linking target lib/librte_rcu.so.24.1 00:01:56.821 [244/267] Linking target lib/librte_mempool.so.24.1 00:01:56.821 [245/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:01:56.821 [246/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:01:56.821 [247/267] Linking target lib/librte_mbuf.so.24.1 00:01:56.821 [248/267] Linking target drivers/librte_mempool_ring.so.24.1 00:01:57.079 [249/267] Generating symbol file 
lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:01:57.079 [250/267] Linking target lib/librte_net.so.24.1 00:01:57.079 [251/267] Linking target lib/librte_compressdev.so.24.1 00:01:57.079 [252/267] Linking target lib/librte_reorder.so.24.1 00:01:57.079 [253/267] Linking target lib/librte_cryptodev.so.24.1 00:01:57.079 [254/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:01:57.337 [255/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:01:57.337 [256/267] Linking target lib/librte_hash.so.24.1 00:01:57.337 [257/267] Linking target lib/librte_cmdline.so.24.1 00:01:57.337 [258/267] Linking target lib/librte_security.so.24.1 00:01:57.337 [259/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.337 [260/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:01:57.337 [261/267] Linking target lib/librte_ethdev.so.24.1 00:01:57.593 [262/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:01:57.593 [263/267] Linking target lib/librte_power.so.24.1 00:01:58.580 [264/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:58.580 [265/267] Linking static target lib/librte_vhost.a 00:01:59.951 [266/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.951 [267/267] Linking target lib/librte_vhost.so.24.1 00:01:59.951 INFO: autodetecting backend as ninja 00:01:59.951 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:02:18.016 CC lib/log/log_flags.o 00:02:18.016 CC lib/log/log.o 00:02:18.016 CC lib/log/log_deprecated.o 00:02:18.016 CC lib/ut/ut.o 00:02:18.016 CC lib/ut_mock/mock.o 00:02:18.016 LIB libspdk_ut_mock.a 00:02:18.016 LIB libspdk_ut.a 00:02:18.016 LIB libspdk_log.a 00:02:18.016 SO libspdk_ut.so.2.0 00:02:18.016 SO libspdk_ut_mock.so.6.0 
00:02:18.016 SO libspdk_log.so.7.1 00:02:18.016 SYMLINK libspdk_ut_mock.so 00:02:18.016 SYMLINK libspdk_ut.so 00:02:18.016 SYMLINK libspdk_log.so 00:02:18.016 CC lib/dma/dma.o 00:02:18.017 CC lib/util/base64.o 00:02:18.017 CC lib/util/cpuset.o 00:02:18.017 CC lib/util/crc16.o 00:02:18.017 CC lib/util/bit_array.o 00:02:18.017 CC lib/ioat/ioat.o 00:02:18.017 CC lib/util/crc32.o 00:02:18.017 CC lib/util/crc32c.o 00:02:18.017 CXX lib/trace_parser/trace.o 00:02:18.017 CC lib/vfio_user/host/vfio_user_pci.o 00:02:18.017 CC lib/util/crc32_ieee.o 00:02:18.017 CC lib/util/crc64.o 00:02:18.017 CC lib/util/dif.o 00:02:18.017 CC lib/vfio_user/host/vfio_user.o 00:02:18.017 LIB libspdk_dma.a 00:02:18.017 CC lib/util/fd.o 00:02:18.017 CC lib/util/fd_group.o 00:02:18.017 SO libspdk_dma.so.5.0 00:02:18.017 CC lib/util/file.o 00:02:18.017 LIB libspdk_ioat.a 00:02:18.017 CC lib/util/hexlify.o 00:02:18.017 SO libspdk_ioat.so.7.0 00:02:18.017 SYMLINK libspdk_dma.so 00:02:18.017 CC lib/util/iov.o 00:02:18.017 SYMLINK libspdk_ioat.so 00:02:18.017 CC lib/util/math.o 00:02:18.017 CC lib/util/net.o 00:02:18.017 CC lib/util/pipe.o 00:02:18.017 LIB libspdk_vfio_user.a 00:02:18.017 CC lib/util/strerror_tls.o 00:02:18.017 SO libspdk_vfio_user.so.5.0 00:02:18.017 CC lib/util/string.o 00:02:18.017 CC lib/util/uuid.o 00:02:18.017 CC lib/util/xor.o 00:02:18.017 SYMLINK libspdk_vfio_user.so 00:02:18.017 CC lib/util/zipf.o 00:02:18.017 CC lib/util/md5.o 00:02:18.017 LIB libspdk_util.a 00:02:18.017 LIB libspdk_trace_parser.a 00:02:18.017 SO libspdk_util.so.10.1 00:02:18.017 SO libspdk_trace_parser.so.6.0 00:02:18.017 SYMLINK libspdk_trace_parser.so 00:02:18.017 SYMLINK libspdk_util.so 00:02:18.017 CC lib/conf/conf.o 00:02:18.017 CC lib/json/json_parse.o 00:02:18.017 CC lib/json/json_util.o 00:02:18.017 CC lib/json/json_write.o 00:02:18.017 CC lib/rdma_utils/rdma_utils.o 00:02:18.017 CC lib/idxd/idxd.o 00:02:18.017 CC lib/vmd/led.o 00:02:18.017 CC lib/vmd/vmd.o 00:02:18.017 CC lib/idxd/idxd_user.o 
00:02:18.017 CC lib/env_dpdk/env.o 00:02:18.275 CC lib/idxd/idxd_kernel.o 00:02:18.275 LIB libspdk_conf.a 00:02:18.275 SO libspdk_conf.so.6.0 00:02:18.275 CC lib/env_dpdk/memory.o 00:02:18.275 SYMLINK libspdk_conf.so 00:02:18.275 CC lib/env_dpdk/pci.o 00:02:18.275 CC lib/env_dpdk/init.o 00:02:18.275 CC lib/env_dpdk/threads.o 00:02:18.531 LIB libspdk_rdma_utils.a 00:02:18.531 LIB libspdk_json.a 00:02:18.531 CC lib/env_dpdk/pci_ioat.o 00:02:18.531 SO libspdk_rdma_utils.so.1.0 00:02:18.531 SO libspdk_json.so.6.0 00:02:18.531 SYMLINK libspdk_rdma_utils.so 00:02:18.531 SYMLINK libspdk_json.so 00:02:18.531 CC lib/env_dpdk/pci_virtio.o 00:02:18.531 CC lib/env_dpdk/pci_vmd.o 00:02:18.531 CC lib/env_dpdk/pci_idxd.o 00:02:18.531 CC lib/env_dpdk/pci_event.o 00:02:18.531 CC lib/env_dpdk/sigbus_handler.o 00:02:18.531 CC lib/env_dpdk/pci_dpdk.o 00:02:18.531 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:18.788 LIB libspdk_idxd.a 00:02:18.788 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:18.788 SO libspdk_idxd.so.12.1 00:02:18.788 SYMLINK libspdk_idxd.so 00:02:18.788 LIB libspdk_vmd.a 00:02:18.788 SO libspdk_vmd.so.6.0 00:02:18.788 CC lib/rdma_provider/common.o 00:02:18.788 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:18.788 SYMLINK libspdk_vmd.so 00:02:19.045 CC lib/jsonrpc/jsonrpc_server.o 00:02:19.045 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:19.045 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:19.045 CC lib/jsonrpc/jsonrpc_client.o 00:02:19.045 LIB libspdk_rdma_provider.a 00:02:19.045 SO libspdk_rdma_provider.so.7.0 00:02:19.303 LIB libspdk_jsonrpc.a 00:02:19.303 SYMLINK libspdk_rdma_provider.so 00:02:19.303 SO libspdk_jsonrpc.so.6.0 00:02:19.303 SYMLINK libspdk_jsonrpc.so 00:02:19.303 LIB libspdk_env_dpdk.a 00:02:19.560 SO libspdk_env_dpdk.so.15.1 00:02:19.560 CC lib/rpc/rpc.o 00:02:19.560 SYMLINK libspdk_env_dpdk.so 00:02:19.816 LIB libspdk_rpc.a 00:02:19.816 SO libspdk_rpc.so.6.0 00:02:19.816 SYMLINK libspdk_rpc.so 00:02:20.073 CC lib/notify/notify.o 00:02:20.073 CC lib/notify/notify_rpc.o 
00:02:20.073 CC lib/keyring/keyring.o 00:02:20.073 CC lib/trace/trace.o 00:02:20.073 CC lib/keyring/keyring_rpc.o 00:02:20.073 CC lib/trace/trace_flags.o 00:02:20.073 CC lib/trace/trace_rpc.o 00:02:20.073 LIB libspdk_notify.a 00:02:20.073 SO libspdk_notify.so.6.0 00:02:20.331 SYMLINK libspdk_notify.so 00:02:20.331 LIB libspdk_keyring.a 00:02:20.331 LIB libspdk_trace.a 00:02:20.331 SO libspdk_keyring.so.2.0 00:02:20.331 SO libspdk_trace.so.11.0 00:02:20.331 SYMLINK libspdk_keyring.so 00:02:20.331 SYMLINK libspdk_trace.so 00:02:20.588 CC lib/thread/iobuf.o 00:02:20.588 CC lib/thread/thread.o 00:02:20.588 CC lib/sock/sock.o 00:02:20.588 CC lib/sock/sock_rpc.o 00:02:20.846 LIB libspdk_sock.a 00:02:21.104 SO libspdk_sock.so.10.0 00:02:21.104 SYMLINK libspdk_sock.so 00:02:21.405 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:21.405 CC lib/nvme/nvme_ctrlr.o 00:02:21.405 CC lib/nvme/nvme_fabric.o 00:02:21.405 CC lib/nvme/nvme_ns.o 00:02:21.405 CC lib/nvme/nvme_ns_cmd.o 00:02:21.405 CC lib/nvme/nvme_pcie.o 00:02:21.405 CC lib/nvme/nvme_pcie_common.o 00:02:21.405 CC lib/nvme/nvme.o 00:02:21.405 CC lib/nvme/nvme_qpair.o 00:02:21.968 CC lib/nvme/nvme_quirks.o 00:02:21.968 CC lib/nvme/nvme_transport.o 00:02:21.968 CC lib/nvme/nvme_discovery.o 00:02:21.968 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:21.968 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:21.968 CC lib/nvme/nvme_tcp.o 00:02:22.226 CC lib/nvme/nvme_opal.o 00:02:22.226 CC lib/nvme/nvme_io_msg.o 00:02:22.226 LIB libspdk_thread.a 00:02:22.226 CC lib/nvme/nvme_poll_group.o 00:02:22.226 SO libspdk_thread.so.11.0 00:02:22.226 SYMLINK libspdk_thread.so 00:02:22.484 CC lib/nvme/nvme_zns.o 00:02:22.484 CC lib/nvme/nvme_stubs.o 00:02:22.484 CC lib/nvme/nvme_auth.o 00:02:22.484 CC lib/accel/accel.o 00:02:22.484 CC lib/blob/blobstore.o 00:02:22.741 CC lib/virtio/virtio.o 00:02:22.741 CC lib/init/json_config.o 00:02:22.741 CC lib/fsdev/fsdev.o 00:02:22.998 CC lib/fsdev/fsdev_io.o 00:02:22.998 CC lib/virtio/virtio_vhost_user.o 00:02:22.998 CC 
lib/init/subsystem.o 00:02:23.255 CC lib/accel/accel_rpc.o 00:02:23.255 CC lib/nvme/nvme_cuse.o 00:02:23.255 CC lib/init/subsystem_rpc.o 00:02:23.255 CC lib/virtio/virtio_vfio_user.o 00:02:23.512 CC lib/virtio/virtio_pci.o 00:02:23.512 CC lib/init/rpc.o 00:02:23.512 CC lib/accel/accel_sw.o 00:02:23.512 CC lib/nvme/nvme_rdma.o 00:02:23.512 CC lib/fsdev/fsdev_rpc.o 00:02:23.512 CC lib/blob/request.o 00:02:23.512 CC lib/blob/zeroes.o 00:02:23.512 LIB libspdk_init.a 00:02:23.512 SO libspdk_init.so.6.0 00:02:23.512 CC lib/blob/blob_bs_dev.o 00:02:23.512 SYMLINK libspdk_init.so 00:02:23.512 LIB libspdk_fsdev.a 00:02:23.770 SO libspdk_fsdev.so.2.0 00:02:23.770 LIB libspdk_virtio.a 00:02:23.770 SYMLINK libspdk_fsdev.so 00:02:23.770 SO libspdk_virtio.so.7.0 00:02:23.770 LIB libspdk_accel.a 00:02:23.770 CC lib/event/app.o 00:02:23.770 CC lib/event/reactor.o 00:02:23.770 SO libspdk_accel.so.16.0 00:02:23.770 SYMLINK libspdk_virtio.so 00:02:23.770 CC lib/event/log_rpc.o 00:02:23.770 CC lib/event/app_rpc.o 00:02:23.770 CC lib/event/scheduler_static.o 00:02:23.770 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:02:23.770 SYMLINK libspdk_accel.so 00:02:24.028 CC lib/bdev/bdev.o 00:02:24.028 CC lib/bdev/bdev_zone.o 00:02:24.028 CC lib/bdev/bdev_rpc.o 00:02:24.028 CC lib/bdev/part.o 00:02:24.028 CC lib/bdev/scsi_nvme.o 00:02:24.285 LIB libspdk_event.a 00:02:24.285 SO libspdk_event.so.14.0 00:02:24.285 SYMLINK libspdk_event.so 00:02:24.543 LIB libspdk_fuse_dispatcher.a 00:02:24.543 SO libspdk_fuse_dispatcher.so.1.0 00:02:24.543 SYMLINK libspdk_fuse_dispatcher.so 00:02:24.801 LIB libspdk_nvme.a 00:02:25.059 SO libspdk_nvme.so.15.0 00:02:25.315 SYMLINK libspdk_nvme.so 00:02:25.573 LIB libspdk_blob.a 00:02:25.573 SO libspdk_blob.so.12.0 00:02:25.573 SYMLINK libspdk_blob.so 00:02:25.830 CC lib/lvol/lvol.o 00:02:25.830 CC lib/blobfs/blobfs.o 00:02:25.830 CC lib/blobfs/tree.o 00:02:26.761 LIB libspdk_bdev.a 00:02:26.761 SO libspdk_bdev.so.17.0 00:02:26.761 LIB libspdk_blobfs.a 00:02:26.761 SO 
libspdk_blobfs.so.11.0 00:02:26.761 SYMLINK libspdk_bdev.so 00:02:26.761 SYMLINK libspdk_blobfs.so 00:02:26.761 LIB libspdk_lvol.a 00:02:26.761 SO libspdk_lvol.so.11.0 00:02:27.018 SYMLINK libspdk_lvol.so 00:02:27.018 CC lib/ftl/ftl_core.o 00:02:27.018 CC lib/ftl/ftl_io.o 00:02:27.018 CC lib/ftl/ftl_layout.o 00:02:27.018 CC lib/ftl/ftl_debug.o 00:02:27.018 CC lib/ftl/ftl_sb.o 00:02:27.018 CC lib/nbd/nbd.o 00:02:27.018 CC lib/ftl/ftl_init.o 00:02:27.018 CC lib/scsi/dev.o 00:02:27.018 CC lib/ublk/ublk.o 00:02:27.018 CC lib/nvmf/ctrlr.o 00:02:27.018 CC lib/scsi/lun.o 00:02:27.018 CC lib/ftl/ftl_l2p.o 00:02:27.276 CC lib/ublk/ublk_rpc.o 00:02:27.276 CC lib/nvmf/ctrlr_discovery.o 00:02:27.276 CC lib/nvmf/ctrlr_bdev.o 00:02:27.276 CC lib/nbd/nbd_rpc.o 00:02:27.276 CC lib/scsi/port.o 00:02:27.276 CC lib/scsi/scsi.o 00:02:27.276 CC lib/scsi/scsi_bdev.o 00:02:27.276 CC lib/ftl/ftl_l2p_flat.o 00:02:27.276 CC lib/scsi/scsi_pr.o 00:02:27.533 LIB libspdk_nbd.a 00:02:27.533 CC lib/scsi/scsi_rpc.o 00:02:27.533 SO libspdk_nbd.so.7.0 00:02:27.533 CC lib/scsi/task.o 00:02:27.533 LIB libspdk_ublk.a 00:02:27.533 SYMLINK libspdk_nbd.so 00:02:27.533 CC lib/nvmf/subsystem.o 00:02:27.534 SO libspdk_ublk.so.3.0 00:02:27.534 CC lib/ftl/ftl_nv_cache.o 00:02:27.534 CC lib/ftl/ftl_band.o 00:02:27.534 SYMLINK libspdk_ublk.so 00:02:27.534 CC lib/ftl/ftl_band_ops.o 00:02:27.534 CC lib/ftl/ftl_writer.o 00:02:27.534 CC lib/ftl/ftl_rq.o 00:02:27.791 CC lib/nvmf/nvmf.o 00:02:27.791 CC lib/ftl/ftl_reloc.o 00:02:27.791 CC lib/ftl/ftl_l2p_cache.o 00:02:27.791 LIB libspdk_scsi.a 00:02:27.791 CC lib/ftl/ftl_p2l.o 00:02:27.791 CC lib/ftl/ftl_p2l_log.o 00:02:27.791 SO libspdk_scsi.so.9.0 00:02:28.048 SYMLINK libspdk_scsi.so 00:02:28.048 CC lib/nvmf/nvmf_rpc.o 00:02:28.048 CC lib/ftl/mngt/ftl_mngt.o 00:02:28.048 CC lib/nvmf/transport.o 00:02:28.048 CC lib/nvmf/tcp.o 00:02:28.306 CC lib/iscsi/conn.o 00:02:28.306 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:28.306 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:28.563 CC 
lib/ftl/mngt/ftl_mngt_startup.o 00:02:28.563 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:28.563 CC lib/vhost/vhost.o 00:02:28.563 CC lib/vhost/vhost_rpc.o 00:02:28.819 CC lib/vhost/vhost_scsi.o 00:02:28.819 CC lib/iscsi/init_grp.o 00:02:28.819 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:28.819 CC lib/iscsi/iscsi.o 00:02:28.819 CC lib/iscsi/param.o 00:02:28.819 CC lib/iscsi/portal_grp.o 00:02:28.819 CC lib/vhost/vhost_blk.o 00:02:29.076 CC lib/vhost/rte_vhost_user.o 00:02:29.076 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:29.076 CC lib/iscsi/tgt_node.o 00:02:29.077 CC lib/iscsi/iscsi_subsystem.o 00:02:29.334 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:29.334 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:29.334 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:29.334 CC lib/iscsi/iscsi_rpc.o 00:02:29.591 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:29.591 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:29.591 CC lib/iscsi/task.o 00:02:29.591 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:29.591 CC lib/ftl/utils/ftl_conf.o 00:02:29.848 CC lib/ftl/utils/ftl_md.o 00:02:29.848 CC lib/ftl/utils/ftl_mempool.o 00:02:29.848 CC lib/ftl/utils/ftl_bitmap.o 00:02:29.848 CC lib/nvmf/stubs.o 00:02:29.848 CC lib/ftl/utils/ftl_property.o 00:02:29.848 LIB libspdk_vhost.a 00:02:29.848 CC lib/nvmf/mdns_server.o 00:02:29.848 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:29.848 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:29.848 SO libspdk_vhost.so.8.0 00:02:29.848 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:29.848 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:30.106 SYMLINK libspdk_vhost.so 00:02:30.106 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:30.106 CC lib/nvmf/rdma.o 00:02:30.106 CC lib/nvmf/auth.o 00:02:30.106 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:30.106 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:30.106 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:30.106 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:30.106 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:30.106 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:30.106 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:02:30.106 CC 
lib/ftl/nvc/ftl_nvc_bdev_common.o 00:02:30.364 CC lib/ftl/base/ftl_base_dev.o 00:02:30.364 LIB libspdk_iscsi.a 00:02:30.364 CC lib/ftl/base/ftl_base_bdev.o 00:02:30.364 CC lib/ftl/ftl_trace.o 00:02:30.364 SO libspdk_iscsi.so.8.0 00:02:30.621 SYMLINK libspdk_iscsi.so 00:02:30.621 LIB libspdk_ftl.a 00:02:30.879 SO libspdk_ftl.so.9.0 00:02:30.879 SYMLINK libspdk_ftl.so 00:02:31.811 LIB libspdk_nvmf.a 00:02:32.068 SO libspdk_nvmf.so.20.0 00:02:32.325 SYMLINK libspdk_nvmf.so 00:02:32.583 CC module/env_dpdk/env_dpdk_rpc.o 00:02:32.583 CC module/fsdev/aio/fsdev_aio.o 00:02:32.583 CC module/keyring/linux/keyring.o 00:02:32.583 CC module/keyring/file/keyring.o 00:02:32.583 CC module/accel/error/accel_error.o 00:02:32.583 CC module/sock/posix/posix.o 00:02:32.583 CC module/accel/dsa/accel_dsa.o 00:02:32.583 CC module/accel/ioat/accel_ioat.o 00:02:32.583 CC module/blob/bdev/blob_bdev.o 00:02:32.583 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:32.583 LIB libspdk_env_dpdk_rpc.a 00:02:32.583 SO libspdk_env_dpdk_rpc.so.6.0 00:02:32.583 SYMLINK libspdk_env_dpdk_rpc.so 00:02:32.583 CC module/keyring/file/keyring_rpc.o 00:02:32.583 CC module/accel/ioat/accel_ioat_rpc.o 00:02:32.583 CC module/keyring/linux/keyring_rpc.o 00:02:32.841 LIB libspdk_scheduler_dynamic.a 00:02:32.841 SO libspdk_scheduler_dynamic.so.4.0 00:02:32.841 LIB libspdk_keyring_file.a 00:02:32.841 LIB libspdk_accel_ioat.a 00:02:32.841 CC module/accel/error/accel_error_rpc.o 00:02:32.841 LIB libspdk_blob_bdev.a 00:02:32.841 SO libspdk_accel_ioat.so.6.0 00:02:32.841 SO libspdk_keyring_file.so.2.0 00:02:32.841 SO libspdk_blob_bdev.so.12.0 00:02:32.841 LIB libspdk_keyring_linux.a 00:02:32.841 SYMLINK libspdk_scheduler_dynamic.so 00:02:32.841 SO libspdk_keyring_linux.so.1.0 00:02:32.841 SYMLINK libspdk_blob_bdev.so 00:02:32.841 SYMLINK libspdk_keyring_file.so 00:02:32.841 SYMLINK libspdk_accel_ioat.so 00:02:32.841 CC module/accel/dsa/accel_dsa_rpc.o 00:02:32.841 CC module/fsdev/aio/fsdev_aio_rpc.o 00:02:32.841 CC 
module/accel/iaa/accel_iaa.o 00:02:32.841 SYMLINK libspdk_keyring_linux.so 00:02:32.841 LIB libspdk_accel_error.a 00:02:32.841 SO libspdk_accel_error.so.2.0 00:02:33.099 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:33.099 CC module/fsdev/aio/linux_aio_mgr.o 00:02:33.099 SYMLINK libspdk_accel_error.so 00:02:33.099 CC module/accel/iaa/accel_iaa_rpc.o 00:02:33.099 CC module/scheduler/gscheduler/gscheduler.o 00:02:33.099 LIB libspdk_accel_dsa.a 00:02:33.099 SO libspdk_accel_dsa.so.5.0 00:02:33.099 LIB libspdk_scheduler_dpdk_governor.a 00:02:33.099 LIB libspdk_accel_iaa.a 00:02:33.099 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:33.099 CC module/bdev/delay/vbdev_delay.o 00:02:33.099 SYMLINK libspdk_accel_dsa.so 00:02:33.099 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:33.099 LIB libspdk_scheduler_gscheduler.a 00:02:33.099 SO libspdk_accel_iaa.so.3.0 00:02:33.099 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:33.099 CC module/blobfs/bdev/blobfs_bdev.o 00:02:33.099 SO libspdk_scheduler_gscheduler.so.4.0 00:02:33.099 LIB libspdk_fsdev_aio.a 00:02:33.356 SYMLINK libspdk_accel_iaa.so 00:02:33.356 SO libspdk_fsdev_aio.so.1.0 00:02:33.356 SYMLINK libspdk_scheduler_gscheduler.so 00:02:33.356 CC module/bdev/error/vbdev_error.o 00:02:33.356 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:33.356 SYMLINK libspdk_fsdev_aio.so 00:02:33.356 CC module/bdev/error/vbdev_error_rpc.o 00:02:33.356 CC module/bdev/lvol/vbdev_lvol.o 00:02:33.356 CC module/bdev/gpt/gpt.o 00:02:33.356 LIB libspdk_sock_posix.a 00:02:33.356 SO libspdk_sock_posix.so.6.0 00:02:33.356 CC module/bdev/null/bdev_null.o 00:02:33.356 CC module/bdev/malloc/bdev_malloc.o 00:02:33.356 LIB libspdk_blobfs_bdev.a 00:02:33.356 CC module/bdev/null/bdev_null_rpc.o 00:02:33.356 SYMLINK libspdk_sock_posix.so 00:02:33.356 SO libspdk_blobfs_bdev.so.6.0 00:02:33.356 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:33.356 CC module/bdev/nvme/bdev_nvme.o 00:02:33.614 SYMLINK libspdk_blobfs_bdev.so 00:02:33.614 LIB 
libspdk_bdev_error.a 00:02:33.614 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:33.614 LIB libspdk_bdev_delay.a 00:02:33.614 CC module/bdev/gpt/vbdev_gpt.o 00:02:33.614 SO libspdk_bdev_error.so.6.0 00:02:33.614 SO libspdk_bdev_delay.so.6.0 00:02:33.614 SYMLINK libspdk_bdev_error.so 00:02:33.614 CC module/bdev/nvme/nvme_rpc.o 00:02:33.614 LIB libspdk_bdev_null.a 00:02:33.614 CC module/bdev/nvme/bdev_mdns_client.o 00:02:33.614 SYMLINK libspdk_bdev_delay.so 00:02:33.614 CC module/bdev/nvme/vbdev_opal.o 00:02:33.614 SO libspdk_bdev_null.so.6.0 00:02:33.614 SYMLINK libspdk_bdev_null.so 00:02:33.614 LIB libspdk_bdev_malloc.a 00:02:33.614 CC module/bdev/passthru/vbdev_passthru.o 00:02:33.614 SO libspdk_bdev_malloc.so.6.0 00:02:33.872 LIB libspdk_bdev_gpt.a 00:02:33.872 SO libspdk_bdev_gpt.so.6.0 00:02:33.872 SYMLINK libspdk_bdev_malloc.so 00:02:33.872 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:33.872 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:33.872 CC module/bdev/raid/bdev_raid.o 00:02:33.872 SYMLINK libspdk_bdev_gpt.so 00:02:33.872 CC module/bdev/raid/bdev_raid_rpc.o 00:02:33.872 CC module/bdev/raid/bdev_raid_sb.o 00:02:33.872 CC module/bdev/split/vbdev_split.o 00:02:33.872 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:33.872 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:34.130 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:34.130 LIB libspdk_bdev_lvol.a 00:02:34.130 CC module/bdev/split/vbdev_split_rpc.o 00:02:34.130 CC module/bdev/aio/bdev_aio.o 00:02:34.130 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:34.130 SO libspdk_bdev_lvol.so.6.0 00:02:34.130 LIB libspdk_bdev_passthru.a 00:02:34.130 LIB libspdk_bdev_zone_block.a 00:02:34.130 SO libspdk_bdev_passthru.so.6.0 00:02:34.130 SO libspdk_bdev_zone_block.so.6.0 00:02:34.130 SYMLINK libspdk_bdev_lvol.so 00:02:34.130 CC module/bdev/aio/bdev_aio_rpc.o 00:02:34.395 CC module/bdev/ftl/bdev_ftl.o 00:02:34.395 SYMLINK libspdk_bdev_passthru.so 00:02:34.395 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:34.395 LIB 
libspdk_bdev_split.a 00:02:34.395 CC module/bdev/iscsi/bdev_iscsi.o 00:02:34.395 SYMLINK libspdk_bdev_zone_block.so 00:02:34.395 SO libspdk_bdev_split.so.6.0 00:02:34.395 CC module/bdev/raid/raid0.o 00:02:34.395 SYMLINK libspdk_bdev_split.so 00:02:34.395 CC module/bdev/raid/raid1.o 00:02:34.395 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:34.395 LIB libspdk_bdev_aio.a 00:02:34.395 CC module/bdev/raid/concat.o 00:02:34.395 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:34.395 SO libspdk_bdev_aio.so.6.0 00:02:34.395 LIB libspdk_bdev_ftl.a 00:02:34.652 SO libspdk_bdev_ftl.so.6.0 00:02:34.652 SYMLINK libspdk_bdev_aio.so 00:02:34.652 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:34.652 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:34.652 CC module/bdev/raid/raid5f.o 00:02:34.652 SYMLINK libspdk_bdev_ftl.so 00:02:34.652 LIB libspdk_bdev_iscsi.a 00:02:34.652 SO libspdk_bdev_iscsi.so.6.0 00:02:34.652 SYMLINK libspdk_bdev_iscsi.so 00:02:34.909 LIB libspdk_bdev_virtio.a 00:02:35.166 SO libspdk_bdev_virtio.so.6.0 00:02:35.166 LIB libspdk_bdev_raid.a 00:02:35.166 SYMLINK libspdk_bdev_virtio.so 00:02:35.166 SO libspdk_bdev_raid.so.6.0 00:02:35.166 SYMLINK libspdk_bdev_raid.so 00:02:36.103 LIB libspdk_bdev_nvme.a 00:02:36.364 SO libspdk_bdev_nvme.so.7.1 00:02:36.364 SYMLINK libspdk_bdev_nvme.so 00:02:36.926 CC module/event/subsystems/iobuf/iobuf.o 00:02:36.926 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:36.926 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:36.926 CC module/event/subsystems/sock/sock.o 00:02:36.926 CC module/event/subsystems/vmd/vmd.o 00:02:36.926 CC module/event/subsystems/scheduler/scheduler.o 00:02:36.926 CC module/event/subsystems/fsdev/fsdev.o 00:02:36.926 CC module/event/subsystems/keyring/keyring.o 00:02:36.926 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:36.926 LIB libspdk_event_scheduler.a 00:02:36.926 LIB libspdk_event_keyring.a 00:02:36.926 LIB libspdk_event_iobuf.a 00:02:36.926 LIB libspdk_event_vhost_blk.a 00:02:36.926 LIB 
libspdk_event_sock.a 00:02:36.926 SO libspdk_event_scheduler.so.4.0 00:02:36.926 SO libspdk_event_keyring.so.1.0 00:02:36.926 LIB libspdk_event_fsdev.a 00:02:36.926 SO libspdk_event_vhost_blk.so.3.0 00:02:36.926 LIB libspdk_event_vmd.a 00:02:36.926 SO libspdk_event_sock.so.5.0 00:02:36.926 SO libspdk_event_iobuf.so.3.0 00:02:36.926 SO libspdk_event_fsdev.so.1.0 00:02:36.926 SO libspdk_event_vmd.so.6.0 00:02:36.926 SYMLINK libspdk_event_keyring.so 00:02:36.926 SYMLINK libspdk_event_scheduler.so 00:02:36.926 SYMLINK libspdk_event_vhost_blk.so 00:02:36.926 SYMLINK libspdk_event_iobuf.so 00:02:36.926 SYMLINK libspdk_event_sock.so 00:02:36.926 SYMLINK libspdk_event_fsdev.so 00:02:36.926 SYMLINK libspdk_event_vmd.so 00:02:37.183 CC module/event/subsystems/accel/accel.o 00:02:37.440 LIB libspdk_event_accel.a 00:02:37.440 SO libspdk_event_accel.so.6.0 00:02:37.440 SYMLINK libspdk_event_accel.so 00:02:37.697 CC module/event/subsystems/bdev/bdev.o 00:02:37.697 LIB libspdk_event_bdev.a 00:02:37.954 SO libspdk_event_bdev.so.6.0 00:02:37.954 SYMLINK libspdk_event_bdev.so 00:02:37.954 CC module/event/subsystems/scsi/scsi.o 00:02:37.954 CC module/event/subsystems/nbd/nbd.o 00:02:37.954 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:37.954 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:37.954 CC module/event/subsystems/ublk/ublk.o 00:02:38.227 LIB libspdk_event_ublk.a 00:02:38.227 LIB libspdk_event_nbd.a 00:02:38.227 LIB libspdk_event_scsi.a 00:02:38.227 SO libspdk_event_ublk.so.3.0 00:02:38.227 SO libspdk_event_nbd.so.6.0 00:02:38.227 SO libspdk_event_scsi.so.6.0 00:02:38.227 SYMLINK libspdk_event_nbd.so 00:02:38.227 SYMLINK libspdk_event_ublk.so 00:02:38.227 LIB libspdk_event_nvmf.a 00:02:38.227 SYMLINK libspdk_event_scsi.so 00:02:38.227 SO libspdk_event_nvmf.so.6.0 00:02:38.488 SYMLINK libspdk_event_nvmf.so 00:02:38.488 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:38.488 CC module/event/subsystems/iscsi/iscsi.o 00:02:38.488 LIB libspdk_event_vhost_scsi.a 
00:02:38.488 SO libspdk_event_vhost_scsi.so.3.0 00:02:38.488 LIB libspdk_event_iscsi.a 00:02:38.744 SO libspdk_event_iscsi.so.6.0 00:02:38.744 SYMLINK libspdk_event_vhost_scsi.so 00:02:38.744 SYMLINK libspdk_event_iscsi.so 00:02:38.744 SO libspdk.so.6.0 00:02:38.744 SYMLINK libspdk.so 00:02:39.001 CC app/spdk_lspci/spdk_lspci.o 00:02:39.001 CXX app/trace/trace.o 00:02:39.001 CC app/spdk_nvme_identify/identify.o 00:02:39.001 CC app/trace_record/trace_record.o 00:02:39.001 CC app/spdk_nvme_perf/perf.o 00:02:39.001 CC app/spdk_tgt/spdk_tgt.o 00:02:39.001 CC app/nvmf_tgt/nvmf_main.o 00:02:39.001 CC examples/util/zipf/zipf.o 00:02:39.001 CC test/thread/poller_perf/poller_perf.o 00:02:39.001 CC app/iscsi_tgt/iscsi_tgt.o 00:02:39.001 LINK spdk_lspci 00:02:39.258 LINK poller_perf 00:02:39.258 LINK nvmf_tgt 00:02:39.258 LINK spdk_tgt 00:02:39.258 LINK zipf 00:02:39.258 LINK spdk_trace_record 00:02:39.258 LINK iscsi_tgt 00:02:39.258 LINK spdk_trace 00:02:39.516 CC app/spdk_nvme_discover/discovery_aer.o 00:02:39.516 CC app/spdk_top/spdk_top.o 00:02:39.516 CC test/dma/test_dma/test_dma.o 00:02:39.516 CC test/app/bdev_svc/bdev_svc.o 00:02:39.516 CC examples/ioat/perf/perf.o 00:02:39.516 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:39.516 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:39.516 CC test/app/histogram_perf/histogram_perf.o 00:02:39.516 LINK spdk_nvme_discover 00:02:39.516 LINK bdev_svc 00:02:39.773 LINK ioat_perf 00:02:39.773 LINK histogram_perf 00:02:39.773 LINK spdk_nvme_identify 00:02:39.773 CC test/app/jsoncat/jsoncat.o 00:02:39.773 CC examples/ioat/verify/verify.o 00:02:39.773 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:39.773 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:39.773 LINK spdk_nvme_perf 00:02:40.030 LINK test_dma 00:02:40.030 CC app/spdk_dd/spdk_dd.o 00:02:40.030 LINK jsoncat 00:02:40.030 LINK nvme_fuzz 00:02:40.030 LINK verify 00:02:40.030 CC test/app/stub/stub.o 00:02:40.030 LINK spdk_top 00:02:40.030 CC app/fio/nvme/fio_plugin.o 
00:02:40.287 CC app/vhost/vhost.o 00:02:40.287 CC app/fio/bdev/fio_plugin.o 00:02:40.287 LINK vhost_fuzz 00:02:40.287 LINK stub 00:02:40.287 CC examples/vmd/lsvmd/lsvmd.o 00:02:40.287 LINK spdk_dd 00:02:40.287 CC examples/idxd/perf/perf.o 00:02:40.287 TEST_HEADER include/spdk/accel.h 00:02:40.287 TEST_HEADER include/spdk/accel_module.h 00:02:40.287 LINK vhost 00:02:40.287 TEST_HEADER include/spdk/assert.h 00:02:40.287 TEST_HEADER include/spdk/barrier.h 00:02:40.287 TEST_HEADER include/spdk/base64.h 00:02:40.287 TEST_HEADER include/spdk/bdev.h 00:02:40.287 TEST_HEADER include/spdk/bdev_module.h 00:02:40.287 TEST_HEADER include/spdk/bdev_zone.h 00:02:40.287 TEST_HEADER include/spdk/bit_array.h 00:02:40.287 TEST_HEADER include/spdk/bit_pool.h 00:02:40.287 TEST_HEADER include/spdk/blob_bdev.h 00:02:40.287 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:40.287 TEST_HEADER include/spdk/blobfs.h 00:02:40.287 TEST_HEADER include/spdk/blob.h 00:02:40.287 TEST_HEADER include/spdk/conf.h 00:02:40.287 TEST_HEADER include/spdk/config.h 00:02:40.287 TEST_HEADER include/spdk/cpuset.h 00:02:40.287 TEST_HEADER include/spdk/crc16.h 00:02:40.287 TEST_HEADER include/spdk/crc32.h 00:02:40.287 TEST_HEADER include/spdk/crc64.h 00:02:40.287 TEST_HEADER include/spdk/dif.h 00:02:40.287 TEST_HEADER include/spdk/dma.h 00:02:40.287 TEST_HEADER include/spdk/endian.h 00:02:40.287 TEST_HEADER include/spdk/env_dpdk.h 00:02:40.287 TEST_HEADER include/spdk/env.h 00:02:40.287 LINK lsvmd 00:02:40.287 TEST_HEADER include/spdk/event.h 00:02:40.287 TEST_HEADER include/spdk/fd_group.h 00:02:40.287 TEST_HEADER include/spdk/fd.h 00:02:40.287 TEST_HEADER include/spdk/file.h 00:02:40.287 TEST_HEADER include/spdk/fsdev.h 00:02:40.287 TEST_HEADER include/spdk/fsdev_module.h 00:02:40.287 TEST_HEADER include/spdk/ftl.h 00:02:40.287 TEST_HEADER include/spdk/fuse_dispatcher.h 00:02:40.287 TEST_HEADER include/spdk/gpt_spec.h 00:02:40.287 TEST_HEADER include/spdk/hexlify.h 00:02:40.287 TEST_HEADER 
include/spdk/histogram_data.h 00:02:40.287 TEST_HEADER include/spdk/idxd.h 00:02:40.287 TEST_HEADER include/spdk/idxd_spec.h 00:02:40.287 TEST_HEADER include/spdk/init.h 00:02:40.287 TEST_HEADER include/spdk/ioat.h 00:02:40.287 TEST_HEADER include/spdk/ioat_spec.h 00:02:40.287 TEST_HEADER include/spdk/iscsi_spec.h 00:02:40.287 TEST_HEADER include/spdk/json.h 00:02:40.287 CC examples/vmd/led/led.o 00:02:40.287 TEST_HEADER include/spdk/jsonrpc.h 00:02:40.287 TEST_HEADER include/spdk/keyring.h 00:02:40.544 TEST_HEADER include/spdk/keyring_module.h 00:02:40.544 TEST_HEADER include/spdk/likely.h 00:02:40.544 TEST_HEADER include/spdk/log.h 00:02:40.544 TEST_HEADER include/spdk/lvol.h 00:02:40.544 TEST_HEADER include/spdk/md5.h 00:02:40.544 TEST_HEADER include/spdk/memory.h 00:02:40.544 TEST_HEADER include/spdk/mmio.h 00:02:40.544 TEST_HEADER include/spdk/nbd.h 00:02:40.544 TEST_HEADER include/spdk/net.h 00:02:40.544 TEST_HEADER include/spdk/notify.h 00:02:40.544 TEST_HEADER include/spdk/nvme.h 00:02:40.544 TEST_HEADER include/spdk/nvme_intel.h 00:02:40.544 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:40.544 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:40.544 TEST_HEADER include/spdk/nvme_spec.h 00:02:40.544 TEST_HEADER include/spdk/nvme_zns.h 00:02:40.544 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:40.544 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:40.544 TEST_HEADER include/spdk/nvmf.h 00:02:40.544 TEST_HEADER include/spdk/nvmf_spec.h 00:02:40.544 TEST_HEADER include/spdk/nvmf_transport.h 00:02:40.544 TEST_HEADER include/spdk/opal.h 00:02:40.544 TEST_HEADER include/spdk/opal_spec.h 00:02:40.544 TEST_HEADER include/spdk/pci_ids.h 00:02:40.544 TEST_HEADER include/spdk/pipe.h 00:02:40.544 TEST_HEADER include/spdk/queue.h 00:02:40.544 TEST_HEADER include/spdk/reduce.h 00:02:40.544 TEST_HEADER include/spdk/rpc.h 00:02:40.544 TEST_HEADER include/spdk/scheduler.h 00:02:40.544 TEST_HEADER include/spdk/scsi.h 00:02:40.544 TEST_HEADER include/spdk/scsi_spec.h 00:02:40.544 
TEST_HEADER include/spdk/sock.h 00:02:40.544 TEST_HEADER include/spdk/stdinc.h 00:02:40.544 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:40.544 TEST_HEADER include/spdk/string.h 00:02:40.544 TEST_HEADER include/spdk/thread.h 00:02:40.544 TEST_HEADER include/spdk/trace.h 00:02:40.544 TEST_HEADER include/spdk/trace_parser.h 00:02:40.544 TEST_HEADER include/spdk/tree.h 00:02:40.544 TEST_HEADER include/spdk/ublk.h 00:02:40.544 TEST_HEADER include/spdk/util.h 00:02:40.544 TEST_HEADER include/spdk/uuid.h 00:02:40.544 TEST_HEADER include/spdk/version.h 00:02:40.544 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:40.544 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:40.544 TEST_HEADER include/spdk/vhost.h 00:02:40.544 TEST_HEADER include/spdk/vmd.h 00:02:40.544 TEST_HEADER include/spdk/xor.h 00:02:40.544 TEST_HEADER include/spdk/zipf.h 00:02:40.544 CXX test/cpp_headers/accel.o 00:02:40.544 CXX test/cpp_headers/accel_module.o 00:02:40.544 LINK led 00:02:40.544 LINK idxd_perf 00:02:40.544 CXX test/cpp_headers/assert.o 00:02:40.544 LINK interrupt_tgt 00:02:40.801 CC test/env/mem_callbacks/mem_callbacks.o 00:02:40.801 LINK spdk_bdev 00:02:40.801 CXX test/cpp_headers/barrier.o 00:02:40.801 CC examples/thread/thread/thread_ex.o 00:02:40.801 LINK spdk_nvme 00:02:40.801 CXX test/cpp_headers/base64.o 00:02:40.801 CC test/env/vtophys/vtophys.o 00:02:40.801 CXX test/cpp_headers/bdev.o 00:02:40.801 CXX test/cpp_headers/bdev_module.o 00:02:40.801 CXX test/cpp_headers/bdev_zone.o 00:02:40.801 CXX test/cpp_headers/bit_array.o 00:02:40.801 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:40.801 LINK vtophys 00:02:40.801 CC test/env/memory/memory_ut.o 00:02:40.801 LINK thread 00:02:41.058 CXX test/cpp_headers/blob_bdev.o 00:02:41.058 CXX test/cpp_headers/bit_pool.o 00:02:41.058 CXX test/cpp_headers/blobfs_bdev.o 00:02:41.058 CXX test/cpp_headers/blobfs.o 00:02:41.058 LINK env_dpdk_post_init 00:02:41.058 CXX test/cpp_headers/blob.o 00:02:41.058 CXX test/cpp_headers/conf.o 
00:02:41.058 CXX test/cpp_headers/config.o 00:02:41.058 CXX test/cpp_headers/cpuset.o 00:02:41.058 LINK mem_callbacks 00:02:41.058 CXX test/cpp_headers/crc16.o 00:02:41.317 CXX test/cpp_headers/crc32.o 00:02:41.317 CC examples/sock/hello_world/hello_sock.o 00:02:41.317 CC test/env/pci/pci_ut.o 00:02:41.317 CXX test/cpp_headers/crc64.o 00:02:41.317 CC test/event/event_perf/event_perf.o 00:02:41.317 LINK iscsi_fuzz 00:02:41.317 CC test/event/reactor_perf/reactor_perf.o 00:02:41.317 CC test/event/reactor/reactor.o 00:02:41.317 CC test/event/app_repeat/app_repeat.o 00:02:41.607 CXX test/cpp_headers/dif.o 00:02:41.607 LINK event_perf 00:02:41.607 CC test/event/scheduler/scheduler.o 00:02:41.607 LINK reactor 00:02:41.607 LINK hello_sock 00:02:41.607 LINK reactor_perf 00:02:41.607 CXX test/cpp_headers/dma.o 00:02:41.607 LINK app_repeat 00:02:41.607 CXX test/cpp_headers/endian.o 00:02:41.607 CXX test/cpp_headers/env_dpdk.o 00:02:41.607 CXX test/cpp_headers/env.o 00:02:41.607 CXX test/cpp_headers/event.o 00:02:41.607 LINK scheduler 00:02:41.607 LINK pci_ut 00:02:41.607 CXX test/cpp_headers/fd_group.o 00:02:41.607 CXX test/cpp_headers/fd.o 00:02:41.876 CXX test/cpp_headers/file.o 00:02:41.876 CXX test/cpp_headers/fsdev.o 00:02:41.876 LINK memory_ut 00:02:41.876 CC examples/fsdev/hello_world/hello_fsdev.o 00:02:41.876 CXX test/cpp_headers/fsdev_module.o 00:02:41.876 CC examples/accel/perf/accel_perf.o 00:02:41.876 CC examples/nvme/hello_world/hello_world.o 00:02:41.876 CC examples/nvme/reconnect/reconnect.o 00:02:41.876 CC examples/blob/hello_world/hello_blob.o 00:02:41.876 CC test/nvme/aer/aer.o 00:02:41.876 CC examples/blob/cli/blobcli.o 00:02:41.876 CC test/rpc_client/rpc_client_test.o 00:02:42.133 CXX test/cpp_headers/ftl.o 00:02:42.133 LINK hello_fsdev 00:02:42.133 CC test/accel/dif/dif.o 00:02:42.133 LINK hello_world 00:02:42.133 LINK hello_blob 00:02:42.133 LINK rpc_client_test 00:02:42.133 CXX test/cpp_headers/fuse_dispatcher.o 00:02:42.133 LINK aer 00:02:42.391 LINK 
reconnect 00:02:42.391 CXX test/cpp_headers/gpt_spec.o 00:02:42.391 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:42.391 CC examples/nvme/hotplug/hotplug.o 00:02:42.391 CC examples/nvme/arbitration/arbitration.o 00:02:42.391 LINK accel_perf 00:02:42.391 CC test/blobfs/mkfs/mkfs.o 00:02:42.391 LINK blobcli 00:02:42.391 CXX test/cpp_headers/hexlify.o 00:02:42.391 CC test/nvme/reset/reset.o 00:02:42.391 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:42.391 CXX test/cpp_headers/histogram_data.o 00:02:42.649 LINK hotplug 00:02:42.649 LINK arbitration 00:02:42.649 LINK mkfs 00:02:42.649 CXX test/cpp_headers/idxd.o 00:02:42.649 LINK cmb_copy 00:02:42.649 CXX test/cpp_headers/idxd_spec.o 00:02:42.649 LINK reset 00:02:42.649 CXX test/cpp_headers/init.o 00:02:42.649 LINK nvme_manage 00:02:42.649 CC examples/bdev/hello_world/hello_bdev.o 00:02:42.649 CC test/lvol/esnap/esnap.o 00:02:42.649 LINK dif 00:02:42.907 CXX test/cpp_headers/ioat.o 00:02:42.907 CXX test/cpp_headers/ioat_spec.o 00:02:42.907 CC examples/nvme/abort/abort.o 00:02:42.907 CXX test/cpp_headers/iscsi_spec.o 00:02:42.907 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:42.907 CXX test/cpp_headers/json.o 00:02:42.907 CC test/nvme/sgl/sgl.o 00:02:42.907 CC test/nvme/e2edp/nvme_dp.o 00:02:42.907 CXX test/cpp_headers/jsonrpc.o 00:02:42.907 CXX test/cpp_headers/keyring.o 00:02:42.907 LINK hello_bdev 00:02:42.907 CXX test/cpp_headers/keyring_module.o 00:02:43.165 LINK pmr_persistence 00:02:43.165 CXX test/cpp_headers/likely.o 00:02:43.165 CXX test/cpp_headers/log.o 00:02:43.165 CXX test/cpp_headers/lvol.o 00:02:43.165 CC test/bdev/bdevio/bdevio.o 00:02:43.165 CXX test/cpp_headers/md5.o 00:02:43.165 LINK abort 00:02:43.165 LINK sgl 00:02:43.165 LINK nvme_dp 00:02:43.165 CC examples/bdev/bdevperf/bdevperf.o 00:02:43.165 CXX test/cpp_headers/memory.o 00:02:43.165 CXX test/cpp_headers/mmio.o 00:02:43.428 CXX test/cpp_headers/nbd.o 00:02:43.428 CXX test/cpp_headers/net.o 00:02:43.428 CXX test/cpp_headers/notify.o 
00:02:43.428 CXX test/cpp_headers/nvme.o 00:02:43.428 CC test/nvme/overhead/overhead.o 00:02:43.428 CXX test/cpp_headers/nvme_intel.o 00:02:43.428 CC test/nvme/err_injection/err_injection.o 00:02:43.428 CXX test/cpp_headers/nvme_ocssd.o 00:02:43.428 CC test/nvme/startup/startup.o 00:02:43.428 CC test/nvme/reserve/reserve.o 00:02:43.686 LINK bdevio 00:02:43.686 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:43.686 LINK err_injection 00:02:43.686 CC test/nvme/simple_copy/simple_copy.o 00:02:43.686 CXX test/cpp_headers/nvme_spec.o 00:02:43.686 LINK overhead 00:02:43.686 LINK startup 00:02:43.686 LINK reserve 00:02:43.686 CXX test/cpp_headers/nvme_zns.o 00:02:43.686 CXX test/cpp_headers/nvmf_cmd.o 00:02:43.686 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:43.686 CXX test/cpp_headers/nvmf.o 00:02:43.686 CXX test/cpp_headers/nvmf_spec.o 00:02:43.944 CC test/nvme/connect_stress/connect_stress.o 00:02:43.944 LINK simple_copy 00:02:43.944 CXX test/cpp_headers/nvmf_transport.o 00:02:43.944 CC test/nvme/boot_partition/boot_partition.o 00:02:43.944 CXX test/cpp_headers/opal.o 00:02:43.944 CXX test/cpp_headers/opal_spec.o 00:02:43.944 CXX test/cpp_headers/pci_ids.o 00:02:43.944 CXX test/cpp_headers/pipe.o 00:02:43.944 LINK boot_partition 00:02:43.944 LINK connect_stress 00:02:43.944 CXX test/cpp_headers/queue.o 00:02:44.202 CC test/nvme/compliance/nvme_compliance.o 00:02:44.202 CXX test/cpp_headers/reduce.o 00:02:44.202 CXX test/cpp_headers/rpc.o 00:02:44.202 LINK bdevperf 00:02:44.202 CXX test/cpp_headers/scheduler.o 00:02:44.202 CXX test/cpp_headers/scsi.o 00:02:44.202 CC test/nvme/fused_ordering/fused_ordering.o 00:02:44.202 CXX test/cpp_headers/scsi_spec.o 00:02:44.202 CXX test/cpp_headers/sock.o 00:02:44.202 CXX test/cpp_headers/stdinc.o 00:02:44.202 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:44.202 CXX test/cpp_headers/string.o 00:02:44.460 CXX test/cpp_headers/thread.o 00:02:44.460 CC test/nvme/fdp/fdp.o 00:02:44.460 LINK fused_ordering 00:02:44.460 CXX 
test/cpp_headers/trace.o 00:02:44.460 CC examples/nvmf/nvmf/nvmf.o 00:02:44.460 LINK nvme_compliance 00:02:44.460 LINK doorbell_aers 00:02:44.460 CC test/nvme/cuse/cuse.o 00:02:44.460 CXX test/cpp_headers/trace_parser.o 00:02:44.460 CXX test/cpp_headers/tree.o 00:02:44.460 CXX test/cpp_headers/ublk.o 00:02:44.460 CXX test/cpp_headers/util.o 00:02:44.460 CXX test/cpp_headers/uuid.o 00:02:44.460 CXX test/cpp_headers/version.o 00:02:44.460 CXX test/cpp_headers/vfio_user_pci.o 00:02:44.460 CXX test/cpp_headers/vfio_user_spec.o 00:02:44.460 CXX test/cpp_headers/vhost.o 00:02:44.718 CXX test/cpp_headers/vmd.o 00:02:44.718 CXX test/cpp_headers/xor.o 00:02:44.718 CXX test/cpp_headers/zipf.o 00:02:44.718 LINK fdp 00:02:44.718 LINK nvmf 00:02:45.649 LINK cuse 00:02:48.177 LINK esnap 00:02:48.177 00:02:48.177 real 1m15.013s 00:02:48.177 user 6m41.984s 00:02:48.177 sys 1m15.454s 00:02:48.177 19:43:38 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:48.177 19:43:38 make -- common/autotest_common.sh@10 -- $ set +x 00:02:48.177 ************************************ 00:02:48.177 END TEST make 00:02:48.177 ************************************ 00:02:48.177 19:43:38 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:48.177 19:43:38 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:48.177 19:43:38 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:48.177 19:43:38 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:48.177 19:43:38 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:02:48.177 19:43:38 -- pm/common@44 -- $ pid=5027 00:02:48.177 19:43:38 -- pm/common@50 -- $ kill -TERM 5027 00:02:48.177 19:43:38 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:48.177 19:43:38 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:02:48.177 19:43:38 -- pm/common@44 -- $ pid=5028 00:02:48.177 19:43:38 -- pm/common@50 -- $ kill 
-TERM 5028 00:02:48.177 19:43:38 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:02:48.177 19:43:38 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:48.177 19:43:39 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:02:48.177 19:43:39 -- common/autotest_common.sh@1693 -- # lcov --version 00:02:48.177 19:43:39 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:02:48.177 19:43:39 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:02:48.177 19:43:39 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:02:48.177 19:43:39 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:02:48.177 19:43:39 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:02:48.177 19:43:39 -- scripts/common.sh@336 -- # IFS=.-: 00:02:48.177 19:43:39 -- scripts/common.sh@336 -- # read -ra ver1 00:02:48.177 19:43:39 -- scripts/common.sh@337 -- # IFS=.-: 00:02:48.177 19:43:39 -- scripts/common.sh@337 -- # read -ra ver2 00:02:48.177 19:43:39 -- scripts/common.sh@338 -- # local 'op=<' 00:02:48.177 19:43:39 -- scripts/common.sh@340 -- # ver1_l=2 00:02:48.177 19:43:39 -- scripts/common.sh@341 -- # ver2_l=1 00:02:48.177 19:43:39 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:02:48.177 19:43:39 -- scripts/common.sh@344 -- # case "$op" in 00:02:48.177 19:43:39 -- scripts/common.sh@345 -- # : 1 00:02:48.177 19:43:39 -- scripts/common.sh@364 -- # (( v = 0 )) 00:02:48.177 19:43:39 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:48.177 19:43:39 -- scripts/common.sh@365 -- # decimal 1 00:02:48.177 19:43:39 -- scripts/common.sh@353 -- # local d=1 00:02:48.177 19:43:39 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:02:48.177 19:43:39 -- scripts/common.sh@355 -- # echo 1 00:02:48.177 19:43:39 -- scripts/common.sh@365 -- # ver1[v]=1 00:02:48.177 19:43:39 -- scripts/common.sh@366 -- # decimal 2 00:02:48.177 19:43:39 -- scripts/common.sh@353 -- # local d=2 00:02:48.177 19:43:39 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:02:48.177 19:43:39 -- scripts/common.sh@355 -- # echo 2 00:02:48.177 19:43:39 -- scripts/common.sh@366 -- # ver2[v]=2 00:02:48.177 19:43:39 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:02:48.177 19:43:39 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:02:48.177 19:43:39 -- scripts/common.sh@368 -- # return 0 00:02:48.177 19:43:39 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:02:48.177 19:43:39 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:02:48.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:48.177 --rc genhtml_branch_coverage=1 00:02:48.177 --rc genhtml_function_coverage=1 00:02:48.177 --rc genhtml_legend=1 00:02:48.177 --rc geninfo_all_blocks=1 00:02:48.177 --rc geninfo_unexecuted_blocks=1 00:02:48.177 00:02:48.177 ' 00:02:48.177 19:43:39 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:02:48.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:48.177 --rc genhtml_branch_coverage=1 00:02:48.177 --rc genhtml_function_coverage=1 00:02:48.177 --rc genhtml_legend=1 00:02:48.177 --rc geninfo_all_blocks=1 00:02:48.177 --rc geninfo_unexecuted_blocks=1 00:02:48.177 00:02:48.177 ' 00:02:48.177 19:43:39 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:02:48.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:48.177 --rc genhtml_branch_coverage=1 00:02:48.177 --rc 
genhtml_function_coverage=1 00:02:48.177 --rc genhtml_legend=1 00:02:48.177 --rc geninfo_all_blocks=1 00:02:48.177 --rc geninfo_unexecuted_blocks=1 00:02:48.177 00:02:48.177 ' 00:02:48.177 19:43:39 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:02:48.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:48.177 --rc genhtml_branch_coverage=1 00:02:48.177 --rc genhtml_function_coverage=1 00:02:48.177 --rc genhtml_legend=1 00:02:48.177 --rc geninfo_all_blocks=1 00:02:48.177 --rc geninfo_unexecuted_blocks=1 00:02:48.177 00:02:48.177 ' 00:02:48.177 19:43:39 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:02:48.434 19:43:39 -- nvmf/common.sh@7 -- # uname -s 00:02:48.434 19:43:39 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:48.434 19:43:39 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:48.434 19:43:39 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:48.434 19:43:39 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:48.434 19:43:39 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:48.434 19:43:39 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:48.434 19:43:39 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:48.434 19:43:39 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:48.434 19:43:39 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:48.434 19:43:39 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:48.434 19:43:39 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c43568d3-4192-481f-9cc6-13b2a52015b5 00:02:48.434 19:43:39 -- nvmf/common.sh@18 -- # NVME_HOSTID=c43568d3-4192-481f-9cc6-13b2a52015b5 00:02:48.434 19:43:39 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:48.434 19:43:39 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:48.434 19:43:39 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:02:48.434 19:43:39 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:02:48.434 19:43:39 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:48.434 19:43:39 -- scripts/common.sh@15 -- # shopt -s extglob 00:02:48.434 19:43:39 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:48.434 19:43:39 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:48.434 19:43:39 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:48.434 19:43:39 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:48.434 19:43:39 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:48.434 19:43:39 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:48.434 19:43:39 -- paths/export.sh@5 -- # export PATH 00:02:48.434 19:43:39 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:48.434 19:43:39 -- nvmf/common.sh@51 -- # : 0 00:02:48.434 19:43:39 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:02:48.434 19:43:39 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:02:48.434 19:43:39 -- nvmf/common.sh@25 
-- # '[' 0 -eq 1 ']' 00:02:48.434 19:43:39 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:48.434 19:43:39 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:48.434 19:43:39 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:02:48.434 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:02:48.434 19:43:39 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:02:48.434 19:43:39 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:02:48.434 19:43:39 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:02:48.434 19:43:39 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:48.434 19:43:39 -- spdk/autotest.sh@32 -- # uname -s 00:02:48.434 19:43:39 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:48.434 19:43:39 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:48.434 19:43:39 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:02:48.434 19:43:39 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:02:48.434 19:43:39 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:02:48.434 19:43:39 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:48.434 19:43:39 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:48.434 19:43:39 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:48.434 19:43:39 -- spdk/autotest.sh@48 -- # udevadm_pid=53800 00:02:48.434 19:43:39 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:48.434 19:43:39 -- pm/common@17 -- # local monitor 00:02:48.434 19:43:39 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:48.434 19:43:39 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:48.434 19:43:39 -- pm/common@25 -- # sleep 1 00:02:48.434 19:43:39 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:48.434 19:43:39 -- pm/common@21 -- # date +%s 00:02:48.434 19:43:39 -- 
pm/common@21 -- # date +%s 00:02:48.434 19:43:39 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732650219 00:02:48.434 19:43:39 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732650219 00:02:48.434 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732650219_collect-cpu-load.pm.log 00:02:48.434 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732650219_collect-vmstat.pm.log 00:02:49.362 19:43:40 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:49.362 19:43:40 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:49.362 19:43:40 -- common/autotest_common.sh@726 -- # xtrace_disable 00:02:49.362 19:43:40 -- common/autotest_common.sh@10 -- # set +x 00:02:49.362 19:43:40 -- spdk/autotest.sh@59 -- # create_test_list 00:02:49.362 19:43:40 -- common/autotest_common.sh@752 -- # xtrace_disable 00:02:49.362 19:43:40 -- common/autotest_common.sh@10 -- # set +x 00:02:49.362 19:43:40 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:02:49.362 19:43:40 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:02:49.362 19:43:40 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:02:49.363 19:43:40 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:02:49.363 19:43:40 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:02:49.363 19:43:40 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:49.363 19:43:40 -- common/autotest_common.sh@1457 -- # uname 00:02:49.363 19:43:40 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:02:49.363 19:43:40 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:49.363 19:43:40 -- common/autotest_common.sh@1477 -- 
# uname 00:02:49.363 19:43:40 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:02:49.363 19:43:40 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:02:49.363 19:43:40 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:02:49.620 lcov: LCOV version 1.15 00:02:49.620 19:43:40 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:03:04.482 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:04.482 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:03:19.339 19:44:08 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:03:19.339 19:44:08 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:19.339 19:44:08 -- common/autotest_common.sh@10 -- # set +x 00:03:19.339 19:44:08 -- spdk/autotest.sh@78 -- # rm -f 00:03:19.339 19:44:08 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:19.339 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:19.339 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:03:19.339 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:03:19.339 19:44:08 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:03:19.339 19:44:08 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:03:19.339 19:44:08 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:03:19.339 19:44:08 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:03:19.339 
19:44:08 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:03:19.339 19:44:08 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:03:19.339 19:44:08 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:03:19.339 19:44:08 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:19.339 19:44:08 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:19.339 19:44:08 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:03:19.339 19:44:08 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:03:19.339 19:44:08 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:03:19.339 19:44:08 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:19.339 19:44:08 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:19.339 19:44:08 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:03:19.339 19:44:08 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n2 00:03:19.339 19:44:08 -- common/autotest_common.sh@1650 -- # local device=nvme1n2 00:03:19.339 19:44:08 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:03:19.339 19:44:08 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:19.339 19:44:08 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:03:19.339 19:44:08 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n3 00:03:19.339 19:44:08 -- common/autotest_common.sh@1650 -- # local device=nvme1n3 00:03:19.339 19:44:08 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:03:19.339 19:44:08 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:19.339 19:44:08 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:03:19.339 19:44:08 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:19.339 19:44:08 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:19.339 19:44:08 -- spdk/autotest.sh@100 -- # 
block_in_use /dev/nvme0n1 00:03:19.339 19:44:08 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:03:19.339 19:44:08 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:19.339 No valid GPT data, bailing 00:03:19.339 19:44:08 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:19.339 19:44:08 -- scripts/common.sh@394 -- # pt= 00:03:19.339 19:44:08 -- scripts/common.sh@395 -- # return 1 00:03:19.339 19:44:08 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:19.339 1+0 records in 00:03:19.339 1+0 records out 00:03:19.339 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0036687 s, 286 MB/s 00:03:19.339 19:44:08 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:19.339 19:44:08 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:19.339 19:44:08 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:03:19.339 19:44:08 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:03:19.339 19:44:08 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:03:19.339 No valid GPT data, bailing 00:03:19.339 19:44:08 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:03:19.339 19:44:08 -- scripts/common.sh@394 -- # pt= 00:03:19.339 19:44:08 -- scripts/common.sh@395 -- # return 1 00:03:19.339 19:44:08 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:03:19.339 1+0 records in 00:03:19.339 1+0 records out 00:03:19.339 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00299637 s, 350 MB/s 00:03:19.339 19:44:08 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:19.340 19:44:08 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:19.340 19:44:08 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:03:19.340 19:44:08 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:03:19.340 19:44:08 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 
00:03:19.340 No valid GPT data, bailing 00:03:19.340 19:44:08 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:03:19.340 19:44:08 -- scripts/common.sh@394 -- # pt= 00:03:19.340 19:44:08 -- scripts/common.sh@395 -- # return 1 00:03:19.340 19:44:08 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:03:19.340 1+0 records in 00:03:19.340 1+0 records out 00:03:19.340 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00473464 s, 221 MB/s 00:03:19.340 19:44:08 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:19.340 19:44:08 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:19.340 19:44:08 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:03:19.340 19:44:08 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:03:19.340 19:44:08 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:03:19.340 No valid GPT data, bailing 00:03:19.340 19:44:08 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:03:19.340 19:44:08 -- scripts/common.sh@394 -- # pt= 00:03:19.340 19:44:08 -- scripts/common.sh@395 -- # return 1 00:03:19.340 19:44:08 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:03:19.340 1+0 records in 00:03:19.340 1+0 records out 00:03:19.340 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00413039 s, 254 MB/s 00:03:19.340 19:44:08 -- spdk/autotest.sh@105 -- # sync 00:03:19.340 19:44:08 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:19.340 19:44:08 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:19.340 19:44:08 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:19.597 19:44:10 -- spdk/autotest.sh@111 -- # uname -s 00:03:19.597 19:44:10 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:03:19.597 19:44:10 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:03:19.597 19:44:10 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 
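The trace above shows autotest.sh probing each NVMe namespace with spdk-gpt.py, getting "No valid GPT data, bailing", and then zeroing the first MiB with `dd`. As a standalone illustration (not part of the log, and operating on a scratch image rather than a real `/dev/nvme*` node), a minimal GPT-signature check plus wipe might look like this; the file name and helper names are hypothetical:

```python
import os

GPT_SIGNATURE = b"EFI PART"  # GPT header magic, stored at the start of LBA 1
SECTOR = 512

def has_gpt(path: str) -> bool:
    """Return True if the image carries a GPT header in its second sector."""
    with open(path, "rb") as f:
        f.seek(SECTOR)                    # GPT header lives at LBA 1
        return f.read(8) == GPT_SIGNATURE

def wipe_first_mib(path: str) -> None:
    """Mimic `dd if=/dev/zero of=DEV bs=1M count=1` in place."""
    with open(path, "r+b") as f:
        f.write(b"\x00" * (1 << 20))

# Demo on a 2 MiB scratch image full of noise (so it has no GPT).
img = "scratch.img"
with open(img, "wb") as f:
    f.write(os.urandom(2 << 20))
if not has_gpt(img):                      # "No valid GPT data, bailing"
    wipe_first_mib(img)
os.remove(img)
```

The real script additionally consults `blkid -s PTTYPE` before deciding the device is safe to reuse; this sketch only covers the GPT-signature half of that decision.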
00:03:20.165 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:20.165 Hugepages 00:03:20.165 node hugesize free / total 00:03:20.165 node0 1048576kB 0 / 0 00:03:20.165 node0 2048kB 0 / 0 00:03:20.165 00:03:20.165 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:20.165 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:03:20.165 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:03:20.422 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:03:20.422 19:44:11 -- spdk/autotest.sh@117 -- # uname -s 00:03:20.422 19:44:11 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:03:20.422 19:44:11 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:03:20.422 19:44:11 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:20.680 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:20.938 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:03:20.938 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:03:20.938 19:44:11 -- common/autotest_common.sh@1517 -- # sleep 1 00:03:21.873 19:44:12 -- common/autotest_common.sh@1518 -- # bdfs=() 00:03:21.873 19:44:12 -- common/autotest_common.sh@1518 -- # local bdfs 00:03:21.873 19:44:12 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:03:21.873 19:44:12 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:03:21.873 19:44:12 -- common/autotest_common.sh@1498 -- # bdfs=() 00:03:21.873 19:44:12 -- common/autotest_common.sh@1498 -- # local bdfs 00:03:21.873 19:44:12 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:21.873 19:44:12 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:03:21.873 19:44:12 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:03:22.131 19:44:12 -- 
common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:03:22.131 19:44:12 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:03:22.131 19:44:12 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:22.390 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:22.390 Waiting for block devices as requested 00:03:22.390 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:03:22.390 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:03:22.390 19:44:13 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:03:22.390 19:44:13 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:03:22.390 19:44:13 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:03:22.390 19:44:13 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:03:22.390 19:44:13 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:03:22.390 19:44:13 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:03:22.390 19:44:13 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:03:22.390 19:44:13 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:03:22.390 19:44:13 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:03:22.390 19:44:13 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:03:22.390 19:44:13 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:03:22.390 19:44:13 -- common/autotest_common.sh@1531 -- # grep oacs 00:03:22.390 19:44:13 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:03:22.648 19:44:13 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:03:22.648 19:44:13 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:03:22.648 19:44:13 -- common/autotest_common.sh@1534 -- 
# [[ 8 -ne 0 ]] 00:03:22.648 19:44:13 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:03:22.648 19:44:13 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:03:22.648 19:44:13 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:03:22.648 19:44:13 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:03:22.648 19:44:13 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:03:22.648 19:44:13 -- common/autotest_common.sh@1543 -- # continue 00:03:22.648 19:44:13 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:03:22.648 19:44:13 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:03:22.648 19:44:13 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:03:22.648 19:44:13 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:03:22.648 19:44:13 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:03:22.648 19:44:13 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:03:22.648 19:44:13 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:03:22.648 19:44:13 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:03:22.648 19:44:13 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:03:22.648 19:44:13 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:03:22.648 19:44:13 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:03:22.648 19:44:13 -- common/autotest_common.sh@1531 -- # grep oacs 00:03:22.648 19:44:13 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:03:22.648 19:44:13 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:03:22.648 19:44:13 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:03:22.648 19:44:13 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:03:22.648 19:44:13 -- common/autotest_common.sh@1540 -- # nvme id-ctrl 
/dev/nvme0 00:03:22.648 19:44:13 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:03:22.648 19:44:13 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:03:22.648 19:44:13 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:03:22.648 19:44:13 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:03:22.648 19:44:13 -- common/autotest_common.sh@1543 -- # continue 00:03:22.648 19:44:13 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:03:22.648 19:44:13 -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:22.648 19:44:13 -- common/autotest_common.sh@10 -- # set +x 00:03:22.648 19:44:13 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:03:22.648 19:44:13 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:22.648 19:44:13 -- common/autotest_common.sh@10 -- # set +x 00:03:22.648 19:44:13 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:23.215 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:23.215 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:03:23.215 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:03:23.215 19:44:14 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:03:23.215 19:44:14 -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:23.215 19:44:14 -- common/autotest_common.sh@10 -- # set +x 00:03:23.215 19:44:14 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:03:23.215 19:44:14 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:03:23.215 19:44:14 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:03:23.215 19:44:14 -- common/autotest_common.sh@1563 -- # bdfs=() 00:03:23.215 19:44:14 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:03:23.215 19:44:14 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:03:23.215 19:44:14 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:03:23.215 19:44:14 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:03:23.215 
19:44:14 -- common/autotest_common.sh@1498 -- # bdfs=() 00:03:23.215 19:44:14 -- common/autotest_common.sh@1498 -- # local bdfs 00:03:23.215 19:44:14 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:23.215 19:44:14 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:03:23.215 19:44:14 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:03:23.473 19:44:14 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:03:23.473 19:44:14 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:03:23.473 19:44:14 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:03:23.473 19:44:14 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:03:23.473 19:44:14 -- common/autotest_common.sh@1566 -- # device=0x0010 00:03:23.473 19:44:14 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:03:23.473 19:44:14 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:03:23.473 19:44:14 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:03:23.473 19:44:14 -- common/autotest_common.sh@1566 -- # device=0x0010 00:03:23.473 19:44:14 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:03:23.473 19:44:14 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:03:23.473 19:44:14 -- common/autotest_common.sh@1572 -- # return 0 00:03:23.473 19:44:14 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:03:23.473 19:44:14 -- common/autotest_common.sh@1580 -- # return 0 00:03:23.473 19:44:14 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:03:23.473 19:44:14 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:03:23.473 19:44:14 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:03:23.473 19:44:14 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:03:23.473 19:44:14 -- spdk/autotest.sh@149 -- # timing_enter lib 00:03:23.473 19:44:14 -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:03:23.473 19:44:14 -- common/autotest_common.sh@10 -- # set +x 00:03:23.473 19:44:14 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:03:23.473 19:44:14 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:03:23.473 19:44:14 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:23.473 19:44:14 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:23.473 19:44:14 -- common/autotest_common.sh@10 -- # set +x 00:03:23.473 ************************************ 00:03:23.473 START TEST env 00:03:23.473 ************************************ 00:03:23.473 19:44:14 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:03:23.473 * Looking for test storage... 00:03:23.473 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:03:23.473 19:44:14 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:23.473 19:44:14 env -- common/autotest_common.sh@1693 -- # lcov --version 00:03:23.473 19:44:14 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:23.473 19:44:14 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:23.473 19:44:14 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:23.473 19:44:14 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:23.473 19:44:14 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:23.473 19:44:14 env -- scripts/common.sh@336 -- # IFS=.-: 00:03:23.473 19:44:14 env -- scripts/common.sh@336 -- # read -ra ver1 00:03:23.473 19:44:14 env -- scripts/common.sh@337 -- # IFS=.-: 00:03:23.473 19:44:14 env -- scripts/common.sh@337 -- # read -ra ver2 00:03:23.473 19:44:14 env -- scripts/common.sh@338 -- # local 'op=<' 00:03:23.473 19:44:14 env -- scripts/common.sh@340 -- # ver1_l=2 00:03:23.473 19:44:14 env -- scripts/common.sh@341 -- # ver2_l=1 00:03:23.473 19:44:14 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:23.473 19:44:14 env -- 
scripts/common.sh@344 -- # case "$op" in 00:03:23.473 19:44:14 env -- scripts/common.sh@345 -- # : 1 00:03:23.473 19:44:14 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:23.474 19:44:14 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:23.474 19:44:14 env -- scripts/common.sh@365 -- # decimal 1 00:03:23.474 19:44:14 env -- scripts/common.sh@353 -- # local d=1 00:03:23.474 19:44:14 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:23.474 19:44:14 env -- scripts/common.sh@355 -- # echo 1 00:03:23.474 19:44:14 env -- scripts/common.sh@365 -- # ver1[v]=1 00:03:23.474 19:44:14 env -- scripts/common.sh@366 -- # decimal 2 00:03:23.474 19:44:14 env -- scripts/common.sh@353 -- # local d=2 00:03:23.474 19:44:14 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:23.474 19:44:14 env -- scripts/common.sh@355 -- # echo 2 00:03:23.474 19:44:14 env -- scripts/common.sh@366 -- # ver2[v]=2 00:03:23.474 19:44:14 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:23.474 19:44:14 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:23.474 19:44:14 env -- scripts/common.sh@368 -- # return 0 00:03:23.474 19:44:14 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:23.474 19:44:14 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:23.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:23.474 --rc genhtml_branch_coverage=1 00:03:23.474 --rc genhtml_function_coverage=1 00:03:23.474 --rc genhtml_legend=1 00:03:23.474 --rc geninfo_all_blocks=1 00:03:23.474 --rc geninfo_unexecuted_blocks=1 00:03:23.474 00:03:23.474 ' 00:03:23.474 19:44:14 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:23.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:23.474 --rc genhtml_branch_coverage=1 00:03:23.474 --rc genhtml_function_coverage=1 00:03:23.474 --rc genhtml_legend=1 00:03:23.474 --rc 
geninfo_all_blocks=1 00:03:23.474 --rc geninfo_unexecuted_blocks=1 00:03:23.474 00:03:23.474 ' 00:03:23.474 19:44:14 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:23.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:23.474 --rc genhtml_branch_coverage=1 00:03:23.474 --rc genhtml_function_coverage=1 00:03:23.474 --rc genhtml_legend=1 00:03:23.474 --rc geninfo_all_blocks=1 00:03:23.474 --rc geninfo_unexecuted_blocks=1 00:03:23.474 00:03:23.474 ' 00:03:23.474 19:44:14 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:23.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:23.474 --rc genhtml_branch_coverage=1 00:03:23.474 --rc genhtml_function_coverage=1 00:03:23.474 --rc genhtml_legend=1 00:03:23.474 --rc geninfo_all_blocks=1 00:03:23.474 --rc geninfo_unexecuted_blocks=1 00:03:23.474 00:03:23.474 ' 00:03:23.474 19:44:14 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:03:23.474 19:44:14 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:23.474 19:44:14 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:23.474 19:44:14 env -- common/autotest_common.sh@10 -- # set +x 00:03:23.474 ************************************ 00:03:23.474 START TEST env_memory 00:03:23.474 ************************************ 00:03:23.474 19:44:14 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:03:23.474 00:03:23.474 00:03:23.474 CUnit - A unit testing framework for C - Version 2.1-3 00:03:23.474 http://cunit.sourceforge.net/ 00:03:23.474 00:03:23.474 00:03:23.474 Suite: memory 00:03:23.732 Test: alloc and free memory map ...[2024-11-26 19:44:14.444494] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:03:23.732 passed 00:03:23.732 Test: mem map translation ...[2024-11-26 19:44:14.485970] 
/home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:03:23.732 [2024-11-26 19:44:14.486044] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:03:23.732 [2024-11-26 19:44:14.486409] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:03:23.732 [2024-11-26 19:44:14.486445] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:03:23.732 passed 00:03:23.732 Test: mem map registration ...[2024-11-26 19:44:14.554860] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:03:23.732 [2024-11-26 19:44:14.554911] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:03:23.732 passed 00:03:23.732 Test: mem map adjacent registrations ...passed 00:03:23.732 00:03:23.732 Run Summary: Type Total Ran Passed Failed Inactive 00:03:23.732 suites 1 1 n/a 0 0 00:03:23.732 tests 4 4 4 0 0 00:03:23.732 asserts 152 152 152 0 n/a 00:03:23.732 00:03:23.732 Elapsed time = 0.239 seconds 00:03:23.732 00:03:23.732 real 0m0.283s 00:03:23.732 user 0m0.240s 00:03:23.732 sys 0m0.024s 00:03:23.732 19:44:14 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:23.732 19:44:14 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:03:23.732 ************************************ 00:03:23.732 END TEST env_memory 00:03:23.732 ************************************ 00:03:23.991 19:44:14 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:03:23.991 
19:44:14 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:23.991 19:44:14 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:23.991 19:44:14 env -- common/autotest_common.sh@10 -- # set +x 00:03:23.991 ************************************ 00:03:23.991 START TEST env_vtophys 00:03:23.991 ************************************ 00:03:23.991 19:44:14 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:03:23.991 EAL: lib.eal log level changed from notice to debug 00:03:23.991 EAL: Detected lcore 0 as core 0 on socket 0 00:03:23.991 EAL: Detected lcore 1 as core 0 on socket 0 00:03:23.991 EAL: Detected lcore 2 as core 0 on socket 0 00:03:23.991 EAL: Detected lcore 3 as core 0 on socket 0 00:03:23.991 EAL: Detected lcore 4 as core 0 on socket 0 00:03:23.991 EAL: Detected lcore 5 as core 0 on socket 0 00:03:23.991 EAL: Detected lcore 6 as core 0 on socket 0 00:03:23.991 EAL: Detected lcore 7 as core 0 on socket 0 00:03:23.991 EAL: Detected lcore 8 as core 0 on socket 0 00:03:23.991 EAL: Detected lcore 9 as core 0 on socket 0 00:03:23.991 EAL: Maximum logical cores by configuration: 128 00:03:23.991 EAL: Detected CPU lcores: 10 00:03:23.991 EAL: Detected NUMA nodes: 1 00:03:23.991 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:03:23.991 EAL: Detected shared linkage of DPDK 00:03:23.991 EAL: No shared files mode enabled, IPC will be disabled 00:03:23.991 EAL: Selected IOVA mode 'PA' 00:03:23.991 EAL: Probing VFIO support... 00:03:23.991 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:03:23.991 EAL: VFIO modules not loaded, skipping VFIO support... 00:03:23.991 EAL: Ask a virtual area of 0x2e000 bytes 00:03:23.991 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:03:23.991 EAL: Setting up physically contiguous memory... 
00:03:23.991 EAL: Setting maximum number of open files to 524288 00:03:23.991 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:03:23.991 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:03:23.991 EAL: Ask a virtual area of 0x61000 bytes 00:03:23.991 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:03:23.991 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:23.991 EAL: Ask a virtual area of 0x400000000 bytes 00:03:23.991 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:03:23.991 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:03:23.991 EAL: Ask a virtual area of 0x61000 bytes 00:03:23.991 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:03:23.991 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:23.991 EAL: Ask a virtual area of 0x400000000 bytes 00:03:23.991 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:03:23.991 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:03:23.991 EAL: Ask a virtual area of 0x61000 bytes 00:03:23.991 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:03:23.991 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:23.991 EAL: Ask a virtual area of 0x400000000 bytes 00:03:23.991 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:03:23.991 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:03:23.991 EAL: Ask a virtual area of 0x61000 bytes 00:03:23.991 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:03:23.991 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:23.991 EAL: Ask a virtual area of 0x400000000 bytes 00:03:23.991 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:03:23.991 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:03:23.991 EAL: Hugepages will be freed exactly as allocated. 
00:03:23.991 EAL: No shared files mode enabled, IPC is disabled 00:03:23.991 EAL: No shared files mode enabled, IPC is disabled 00:03:23.991 EAL: TSC frequency is ~2600000 KHz 00:03:23.991 EAL: Main lcore 0 is ready (tid=7f8769216a40;cpuset=[0]) 00:03:23.991 EAL: Trying to obtain current memory policy. 00:03:23.991 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:23.991 EAL: Restoring previous memory policy: 0 00:03:23.991 EAL: request: mp_malloc_sync 00:03:23.991 EAL: No shared files mode enabled, IPC is disabled 00:03:23.991 EAL: Heap on socket 0 was expanded by 2MB 00:03:23.991 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:03:23.991 EAL: No PCI address specified using 'addr=' in: bus=pci 00:03:23.991 EAL: Mem event callback 'spdk:(nil)' registered 00:03:23.991 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:03:23.991 00:03:23.991 00:03:23.991 CUnit - A unit testing framework for C - Version 2.1-3 00:03:23.991 http://cunit.sourceforge.net/ 00:03:23.991 00:03:23.991 00:03:23.991 Suite: components_suite 00:03:24.558 Test: vtophys_malloc_test ...passed 00:03:24.558 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:03:24.558 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:24.558 EAL: Restoring previous memory policy: 4 00:03:24.558 EAL: Calling mem event callback 'spdk:(nil)' 00:03:24.558 EAL: request: mp_malloc_sync 00:03:24.558 EAL: No shared files mode enabled, IPC is disabled 00:03:24.558 EAL: Heap on socket 0 was expanded by 4MB 00:03:24.558 EAL: Calling mem event callback 'spdk:(nil)' 00:03:24.558 EAL: request: mp_malloc_sync 00:03:24.558 EAL: No shared files mode enabled, IPC is disabled 00:03:24.558 EAL: Heap on socket 0 was shrunk by 4MB 00:03:24.558 EAL: Trying to obtain current memory policy. 
00:03:24.558 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:24.558 EAL: Restoring previous memory policy: 4 00:03:24.558 EAL: Calling mem event callback 'spdk:(nil)' 00:03:24.558 EAL: request: mp_malloc_sync 00:03:24.558 EAL: No shared files mode enabled, IPC is disabled 00:03:24.558 EAL: Heap on socket 0 was expanded by 6MB 00:03:24.558 EAL: Calling mem event callback 'spdk:(nil)' 00:03:24.558 EAL: request: mp_malloc_sync 00:03:24.558 EAL: No shared files mode enabled, IPC is disabled 00:03:24.558 EAL: Heap on socket 0 was shrunk by 6MB 00:03:24.558 EAL: Trying to obtain current memory policy. 00:03:24.558 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:24.558 EAL: Restoring previous memory policy: 4 00:03:24.558 EAL: Calling mem event callback 'spdk:(nil)' 00:03:24.558 EAL: request: mp_malloc_sync 00:03:24.558 EAL: No shared files mode enabled, IPC is disabled 00:03:24.558 EAL: Heap on socket 0 was expanded by 10MB 00:03:24.558 EAL: Calling mem event callback 'spdk:(nil)' 00:03:24.558 EAL: request: mp_malloc_sync 00:03:24.558 EAL: No shared files mode enabled, IPC is disabled 00:03:24.558 EAL: Heap on socket 0 was shrunk by 10MB 00:03:24.558 EAL: Trying to obtain current memory policy. 00:03:24.558 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:24.558 EAL: Restoring previous memory policy: 4 00:03:24.558 EAL: Calling mem event callback 'spdk:(nil)' 00:03:24.558 EAL: request: mp_malloc_sync 00:03:24.558 EAL: No shared files mode enabled, IPC is disabled 00:03:24.558 EAL: Heap on socket 0 was expanded by 18MB 00:03:24.558 EAL: Calling mem event callback 'spdk:(nil)' 00:03:24.558 EAL: request: mp_malloc_sync 00:03:24.558 EAL: No shared files mode enabled, IPC is disabled 00:03:24.558 EAL: Heap on socket 0 was shrunk by 18MB 00:03:24.558 EAL: Trying to obtain current memory policy. 
00:03:24.558 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:24.558 EAL: Restoring previous memory policy: 4 00:03:24.558 EAL: Calling mem event callback 'spdk:(nil)' 00:03:24.558 EAL: request: mp_malloc_sync 00:03:24.558 EAL: No shared files mode enabled, IPC is disabled 00:03:24.558 EAL: Heap on socket 0 was expanded by 34MB 00:03:24.558 EAL: Calling mem event callback 'spdk:(nil)' 00:03:24.558 EAL: request: mp_malloc_sync 00:03:24.558 EAL: No shared files mode enabled, IPC is disabled 00:03:24.558 EAL: Heap on socket 0 was shrunk by 34MB 00:03:24.558 EAL: Trying to obtain current memory policy. 00:03:24.558 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:24.558 EAL: Restoring previous memory policy: 4 00:03:24.558 EAL: Calling mem event callback 'spdk:(nil)' 00:03:24.558 EAL: request: mp_malloc_sync 00:03:24.558 EAL: No shared files mode enabled, IPC is disabled 00:03:24.558 EAL: Heap on socket 0 was expanded by 66MB 00:03:24.558 EAL: Calling mem event callback 'spdk:(nil)' 00:03:24.558 EAL: request: mp_malloc_sync 00:03:24.558 EAL: No shared files mode enabled, IPC is disabled 00:03:24.558 EAL: Heap on socket 0 was shrunk by 66MB 00:03:24.817 EAL: Trying to obtain current memory policy. 00:03:24.818 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:24.818 EAL: Restoring previous memory policy: 4 00:03:24.818 EAL: Calling mem event callback 'spdk:(nil)' 00:03:24.818 EAL: request: mp_malloc_sync 00:03:24.818 EAL: No shared files mode enabled, IPC is disabled 00:03:24.818 EAL: Heap on socket 0 was expanded by 130MB 00:03:24.818 EAL: Calling mem event callback 'spdk:(nil)' 00:03:24.818 EAL: request: mp_malloc_sync 00:03:24.818 EAL: No shared files mode enabled, IPC is disabled 00:03:24.818 EAL: Heap on socket 0 was shrunk by 130MB 00:03:25.076 EAL: Trying to obtain current memory policy. 
00:03:25.076 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:25.076 EAL: Restoring previous memory policy: 4 00:03:25.076 EAL: Calling mem event callback 'spdk:(nil)' 00:03:25.076 EAL: request: mp_malloc_sync 00:03:25.076 EAL: No shared files mode enabled, IPC is disabled 00:03:25.076 EAL: Heap on socket 0 was expanded by 258MB 00:03:25.335 EAL: Calling mem event callback 'spdk:(nil)' 00:03:25.335 EAL: request: mp_malloc_sync 00:03:25.335 EAL: No shared files mode enabled, IPC is disabled 00:03:25.335 EAL: Heap on socket 0 was shrunk by 258MB 00:03:25.593 EAL: Trying to obtain current memory policy. 00:03:25.593 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:25.851 EAL: Restoring previous memory policy: 4 00:03:25.851 EAL: Calling mem event callback 'spdk:(nil)' 00:03:25.851 EAL: request: mp_malloc_sync 00:03:25.851 EAL: No shared files mode enabled, IPC is disabled 00:03:25.851 EAL: Heap on socket 0 was expanded by 514MB 00:03:26.417 EAL: Calling mem event callback 'spdk:(nil)' 00:03:26.417 EAL: request: mp_malloc_sync 00:03:26.417 EAL: No shared files mode enabled, IPC is disabled 00:03:26.417 EAL: Heap on socket 0 was shrunk by 514MB 00:03:27.017 EAL: Trying to obtain current memory policy. 
00:03:27.017 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:27.276 EAL: Restoring previous memory policy: 4 00:03:27.276 EAL: Calling mem event callback 'spdk:(nil)' 00:03:27.276 EAL: request: mp_malloc_sync 00:03:27.276 EAL: No shared files mode enabled, IPC is disabled 00:03:27.276 EAL: Heap on socket 0 was expanded by 1026MB 00:03:28.209 EAL: Calling mem event callback 'spdk:(nil)' 00:03:28.465 EAL: request: mp_malloc_sync 00:03:28.465 EAL: No shared files mode enabled, IPC is disabled 00:03:28.465 EAL: Heap on socket 0 was shrunk by 1026MB 00:03:29.399 passed 00:03:29.399 00:03:29.399 Run Summary: Type Total Ran Passed Failed Inactive 00:03:29.399 suites 1 1 n/a 0 0 00:03:29.399 tests 2 2 2 0 0 00:03:29.399 asserts 5761 5761 5761 0 n/a 00:03:29.399 00:03:29.399 Elapsed time = 5.076 seconds 00:03:29.399 EAL: Calling mem event callback 'spdk:(nil)' 00:03:29.399 EAL: request: mp_malloc_sync 00:03:29.399 EAL: No shared files mode enabled, IPC is disabled 00:03:29.399 EAL: Heap on socket 0 was shrunk by 2MB 00:03:29.399 EAL: No shared files mode enabled, IPC is disabled 00:03:29.399 EAL: No shared files mode enabled, IPC is disabled 00:03:29.399 EAL: No shared files mode enabled, IPC is disabled 00:03:29.399 00:03:29.399 real 0m5.335s 00:03:29.399 user 0m4.413s 00:03:29.399 sys 0m0.775s 00:03:29.399 19:44:20 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:29.399 19:44:20 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:03:29.399 ************************************ 00:03:29.399 END TEST env_vtophys 00:03:29.399 ************************************ 00:03:29.399 19:44:20 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:03:29.399 19:44:20 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:29.399 19:44:20 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:29.399 19:44:20 env -- common/autotest_common.sh@10 -- # set +x 00:03:29.399 
************************************ 00:03:29.399 START TEST env_pci 00:03:29.399 ************************************ 00:03:29.399 19:44:20 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:03:29.399 00:03:29.399 00:03:29.399 CUnit - A unit testing framework for C - Version 2.1-3 00:03:29.399 http://cunit.sourceforge.net/ 00:03:29.399 00:03:29.399 00:03:29.399 Suite: pci 00:03:29.399 Test: pci_hook ...[2024-11-26 19:44:20.112815] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 56022 has claimed it 00:03:29.399 passed 00:03:29.399 00:03:29.399 EAL: Cannot find device (10000:00:01.0) 00:03:29.399 EAL: Failed to attach device on primary process 00:03:29.399 Run Summary: Type Total Ran Passed Failed Inactive 00:03:29.399 suites 1 1 n/a 0 0 00:03:29.399 tests 1 1 1 0 0 00:03:29.399 asserts 25 25 25 0 n/a 00:03:29.399 00:03:29.399 Elapsed time = 0.005 seconds 00:03:29.399 00:03:29.399 real 0m0.070s 00:03:29.399 user 0m0.028s 00:03:29.399 sys 0m0.042s 00:03:29.399 19:44:20 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:29.399 19:44:20 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:03:29.399 ************************************ 00:03:29.399 END TEST env_pci 00:03:29.399 ************************************ 00:03:29.399 19:44:20 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:03:29.399 19:44:20 env -- env/env.sh@15 -- # uname 00:03:29.399 19:44:20 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:03:29.399 19:44:20 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:03:29.399 19:44:20 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:29.399 19:44:20 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:03:29.399 19:44:20 env 
-- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:29.399 19:44:20 env -- common/autotest_common.sh@10 -- # set +x 00:03:29.399 ************************************ 00:03:29.399 START TEST env_dpdk_post_init 00:03:29.399 ************************************ 00:03:29.399 19:44:20 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:29.399 EAL: Detected CPU lcores: 10 00:03:29.399 EAL: Detected NUMA nodes: 1 00:03:29.399 EAL: Detected shared linkage of DPDK 00:03:29.399 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:29.399 EAL: Selected IOVA mode 'PA' 00:03:29.657 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:29.657 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:03:29.657 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:03:29.657 Starting DPDK initialization... 00:03:29.657 Starting SPDK post initialization... 00:03:29.657 SPDK NVMe probe 00:03:29.657 Attaching to 0000:00:10.0 00:03:29.657 Attaching to 0000:00:11.0 00:03:29.657 Attached to 0000:00:10.0 00:03:29.657 Attached to 0000:00:11.0 00:03:29.657 Cleaning up... 
00:03:29.657 00:03:29.657 real 0m0.237s 00:03:29.657 user 0m0.065s 00:03:29.657 sys 0m0.073s 00:03:29.657 19:44:20 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:29.657 19:44:20 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:03:29.657 ************************************ 00:03:29.657 END TEST env_dpdk_post_init 00:03:29.657 ************************************ 00:03:29.657 19:44:20 env -- env/env.sh@26 -- # uname 00:03:29.657 19:44:20 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:03:29.657 19:44:20 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:03:29.657 19:44:20 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:29.657 19:44:20 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:29.657 19:44:20 env -- common/autotest_common.sh@10 -- # set +x 00:03:29.657 ************************************ 00:03:29.657 START TEST env_mem_callbacks 00:03:29.657 ************************************ 00:03:29.657 19:44:20 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:03:29.657 EAL: Detected CPU lcores: 10 00:03:29.657 EAL: Detected NUMA nodes: 1 00:03:29.657 EAL: Detected shared linkage of DPDK 00:03:29.657 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:29.657 EAL: Selected IOVA mode 'PA' 00:03:29.916 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:29.916 00:03:29.916 00:03:29.916 CUnit - A unit testing framework for C - Version 2.1-3 00:03:29.916 http://cunit.sourceforge.net/ 00:03:29.916 00:03:29.916 00:03:29.916 Suite: memory 00:03:29.916 Test: test ... 
00:03:29.916 register 0x200000200000 2097152 00:03:29.916 malloc 3145728 00:03:29.916 register 0x200000400000 4194304 00:03:29.916 buf 0x2000004fffc0 len 3145728 PASSED 00:03:29.916 malloc 64 00:03:29.916 buf 0x2000004ffec0 len 64 PASSED 00:03:29.916 malloc 4194304 00:03:29.916 register 0x200000800000 6291456 00:03:29.916 buf 0x2000009fffc0 len 4194304 PASSED 00:03:29.916 free 0x2000004fffc0 3145728 00:03:29.916 free 0x2000004ffec0 64 00:03:29.916 unregister 0x200000400000 4194304 PASSED 00:03:29.916 free 0x2000009fffc0 4194304 00:03:29.916 unregister 0x200000800000 6291456 PASSED 00:03:29.916 malloc 8388608 00:03:29.916 register 0x200000400000 10485760 00:03:29.916 buf 0x2000005fffc0 len 8388608 PASSED 00:03:29.916 free 0x2000005fffc0 8388608 00:03:29.916 unregister 0x200000400000 10485760 PASSED 00:03:29.916 passed 00:03:29.916 00:03:29.916 Run Summary: Type Total Ran Passed Failed Inactive 00:03:29.916 suites 1 1 n/a 0 0 00:03:29.916 tests 1 1 1 0 0 00:03:29.916 asserts 15 15 15 0 n/a 00:03:29.916 00:03:29.916 Elapsed time = 0.047 seconds 00:03:29.916 00:03:29.916 real 0m0.213s 00:03:29.916 user 0m0.058s 00:03:29.916 sys 0m0.052s 00:03:29.916 19:44:20 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:29.916 19:44:20 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:03:29.916 ************************************ 00:03:29.916 END TEST env_mem_callbacks 00:03:29.916 ************************************ 00:03:29.916 00:03:29.916 real 0m6.506s 00:03:29.916 user 0m4.959s 00:03:29.916 sys 0m1.170s 00:03:29.916 19:44:20 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:29.916 19:44:20 env -- common/autotest_common.sh@10 -- # set +x 00:03:29.916 ************************************ 00:03:29.916 END TEST env 00:03:29.916 ************************************ 00:03:29.916 19:44:20 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:03:29.916 19:44:20 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:29.916 19:44:20 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:29.916 19:44:20 -- common/autotest_common.sh@10 -- # set +x 00:03:29.916 ************************************ 00:03:29.916 START TEST rpc 00:03:29.916 ************************************ 00:03:29.916 19:44:20 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:03:29.916 * Looking for test storage... 00:03:29.916 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:03:29.916 19:44:20 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:29.916 19:44:20 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:03:29.916 19:44:20 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:30.175 19:44:20 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:30.175 19:44:20 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:30.175 19:44:20 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:30.175 19:44:20 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:30.175 19:44:20 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:03:30.175 19:44:20 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:03:30.175 19:44:20 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:03:30.175 19:44:20 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:03:30.175 19:44:20 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:03:30.175 19:44:20 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:03:30.175 19:44:20 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:03:30.175 19:44:20 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:30.175 19:44:20 rpc -- scripts/common.sh@344 -- # case "$op" in 00:03:30.175 19:44:20 rpc -- scripts/common.sh@345 -- # : 1 00:03:30.175 19:44:20 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:30.175 19:44:20 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:30.175 19:44:20 rpc -- scripts/common.sh@365 -- # decimal 1 00:03:30.175 19:44:20 rpc -- scripts/common.sh@353 -- # local d=1 00:03:30.175 19:44:20 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:30.175 19:44:20 rpc -- scripts/common.sh@355 -- # echo 1 00:03:30.175 19:44:20 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:03:30.175 19:44:20 rpc -- scripts/common.sh@366 -- # decimal 2 00:03:30.175 19:44:20 rpc -- scripts/common.sh@353 -- # local d=2 00:03:30.175 19:44:20 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:30.175 19:44:20 rpc -- scripts/common.sh@355 -- # echo 2 00:03:30.175 19:44:20 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:03:30.175 19:44:20 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:30.175 19:44:20 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:30.175 19:44:20 rpc -- scripts/common.sh@368 -- # return 0 00:03:30.175 19:44:20 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:30.175 19:44:20 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:30.175 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:30.175 --rc genhtml_branch_coverage=1 00:03:30.175 --rc genhtml_function_coverage=1 00:03:30.175 --rc genhtml_legend=1 00:03:30.175 --rc geninfo_all_blocks=1 00:03:30.175 --rc geninfo_unexecuted_blocks=1 00:03:30.175 00:03:30.175 ' 00:03:30.175 19:44:20 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:30.175 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:30.175 --rc genhtml_branch_coverage=1 00:03:30.175 --rc genhtml_function_coverage=1 00:03:30.175 --rc genhtml_legend=1 00:03:30.175 --rc geninfo_all_blocks=1 00:03:30.175 --rc geninfo_unexecuted_blocks=1 00:03:30.175 00:03:30.175 ' 00:03:30.175 19:44:20 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:30.175 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:03:30.175 --rc genhtml_branch_coverage=1 00:03:30.175 --rc genhtml_function_coverage=1 00:03:30.175 --rc genhtml_legend=1 00:03:30.175 --rc geninfo_all_blocks=1 00:03:30.175 --rc geninfo_unexecuted_blocks=1 00:03:30.175 00:03:30.175 ' 00:03:30.175 19:44:20 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:30.175 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:30.175 --rc genhtml_branch_coverage=1 00:03:30.175 --rc genhtml_function_coverage=1 00:03:30.175 --rc genhtml_legend=1 00:03:30.175 --rc geninfo_all_blocks=1 00:03:30.175 --rc geninfo_unexecuted_blocks=1 00:03:30.175 00:03:30.175 ' 00:03:30.175 19:44:20 rpc -- rpc/rpc.sh@65 -- # spdk_pid=56143 00:03:30.175 19:44:20 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:30.175 19:44:20 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:03:30.175 19:44:20 rpc -- rpc/rpc.sh@67 -- # waitforlisten 56143 00:03:30.175 19:44:20 rpc -- common/autotest_common.sh@835 -- # '[' -z 56143 ']' 00:03:30.175 19:44:20 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:30.175 19:44:20 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:30.175 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:30.175 19:44:20 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:30.175 19:44:20 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:30.175 19:44:20 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:30.175 [2024-11-26 19:44:20.980835] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 
00:03:30.175 [2024-11-26 19:44:20.980966] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56143 ] 00:03:30.432 [2024-11-26 19:44:21.141392] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:30.432 [2024-11-26 19:44:21.257836] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:03:30.432 [2024-11-26 19:44:21.257900] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 56143' to capture a snapshot of events at runtime. 00:03:30.432 [2024-11-26 19:44:21.257910] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:03:30.432 [2024-11-26 19:44:21.257922] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:03:30.432 [2024-11-26 19:44:21.257929] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid56143 for offline analysis/debug. 
00:03:30.432 [2024-11-26 19:44:21.258856] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:30.996 19:44:21 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:30.996 19:44:21 rpc -- common/autotest_common.sh@868 -- # return 0 00:03:30.996 19:44:21 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:03:30.996 19:44:21 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:03:30.996 19:44:21 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:03:30.996 19:44:21 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:03:30.996 19:44:21 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:30.996 19:44:21 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:30.996 19:44:21 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:30.996 ************************************ 00:03:30.996 START TEST rpc_integrity 00:03:30.996 ************************************ 00:03:30.996 19:44:21 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:03:30.996 19:44:21 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:30.996 19:44:21 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:30.996 19:44:21 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:30.996 19:44:21 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:30.996 19:44:21 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:30.996 19:44:21 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:31.255 19:44:21 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:31.255 19:44:21 
rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:31.255 19:44:21 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:31.255 19:44:21 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:31.255 19:44:21 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:31.256 19:44:21 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:03:31.256 19:44:21 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:31.256 19:44:21 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:31.256 19:44:21 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:31.256 19:44:21 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:31.256 19:44:21 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:31.256 { 00:03:31.256 "name": "Malloc0", 00:03:31.256 "aliases": [ 00:03:31.256 "ec05dbae-1c95-4f6d-ae5a-d127c7924b78" 00:03:31.256 ], 00:03:31.256 "product_name": "Malloc disk", 00:03:31.256 "block_size": 512, 00:03:31.256 "num_blocks": 16384, 00:03:31.256 "uuid": "ec05dbae-1c95-4f6d-ae5a-d127c7924b78", 00:03:31.256 "assigned_rate_limits": { 00:03:31.256 "rw_ios_per_sec": 0, 00:03:31.256 "rw_mbytes_per_sec": 0, 00:03:31.256 "r_mbytes_per_sec": 0, 00:03:31.256 "w_mbytes_per_sec": 0 00:03:31.256 }, 00:03:31.256 "claimed": false, 00:03:31.256 "zoned": false, 00:03:31.256 "supported_io_types": { 00:03:31.256 "read": true, 00:03:31.256 "write": true, 00:03:31.256 "unmap": true, 00:03:31.256 "flush": true, 00:03:31.256 "reset": true, 00:03:31.256 "nvme_admin": false, 00:03:31.256 "nvme_io": false, 00:03:31.256 "nvme_io_md": false, 00:03:31.256 "write_zeroes": true, 00:03:31.256 "zcopy": true, 00:03:31.256 "get_zone_info": false, 00:03:31.256 "zone_management": false, 00:03:31.256 "zone_append": false, 00:03:31.256 "compare": false, 00:03:31.256 "compare_and_write": false, 00:03:31.256 "abort": true, 00:03:31.256 "seek_hole": false, 
00:03:31.256 "seek_data": false, 00:03:31.256 "copy": true, 00:03:31.256 "nvme_iov_md": false 00:03:31.256 }, 00:03:31.256 "memory_domains": [ 00:03:31.256 { 00:03:31.256 "dma_device_id": "system", 00:03:31.256 "dma_device_type": 1 00:03:31.256 }, 00:03:31.256 { 00:03:31.256 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:31.256 "dma_device_type": 2 00:03:31.256 } 00:03:31.256 ], 00:03:31.256 "driver_specific": {} 00:03:31.256 } 00:03:31.256 ]' 00:03:31.256 19:44:21 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:31.256 19:44:22 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:31.256 19:44:22 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:03:31.256 19:44:22 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:31.256 19:44:22 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:31.256 [2024-11-26 19:44:22.023159] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:03:31.256 [2024-11-26 19:44:22.023228] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:31.256 [2024-11-26 19:44:22.023255] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:03:31.256 [2024-11-26 19:44:22.023271] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:31.256 [2024-11-26 19:44:22.025691] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:31.256 [2024-11-26 19:44:22.025731] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:31.256 Passthru0 00:03:31.256 19:44:22 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:31.256 19:44:22 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:31.256 19:44:22 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:31.256 19:44:22 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 
00:03:31.256 19:44:22 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:31.256 19:44:22 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:31.256 { 00:03:31.256 "name": "Malloc0", 00:03:31.256 "aliases": [ 00:03:31.256 "ec05dbae-1c95-4f6d-ae5a-d127c7924b78" 00:03:31.256 ], 00:03:31.256 "product_name": "Malloc disk", 00:03:31.256 "block_size": 512, 00:03:31.256 "num_blocks": 16384, 00:03:31.256 "uuid": "ec05dbae-1c95-4f6d-ae5a-d127c7924b78", 00:03:31.256 "assigned_rate_limits": { 00:03:31.256 "rw_ios_per_sec": 0, 00:03:31.256 "rw_mbytes_per_sec": 0, 00:03:31.256 "r_mbytes_per_sec": 0, 00:03:31.256 "w_mbytes_per_sec": 0 00:03:31.256 }, 00:03:31.256 "claimed": true, 00:03:31.256 "claim_type": "exclusive_write", 00:03:31.256 "zoned": false, 00:03:31.256 "supported_io_types": { 00:03:31.256 "read": true, 00:03:31.256 "write": true, 00:03:31.256 "unmap": true, 00:03:31.256 "flush": true, 00:03:31.256 "reset": true, 00:03:31.256 "nvme_admin": false, 00:03:31.256 "nvme_io": false, 00:03:31.256 "nvme_io_md": false, 00:03:31.256 "write_zeroes": true, 00:03:31.256 "zcopy": true, 00:03:31.256 "get_zone_info": false, 00:03:31.256 "zone_management": false, 00:03:31.256 "zone_append": false, 00:03:31.256 "compare": false, 00:03:31.256 "compare_and_write": false, 00:03:31.256 "abort": true, 00:03:31.256 "seek_hole": false, 00:03:31.256 "seek_data": false, 00:03:31.256 "copy": true, 00:03:31.256 "nvme_iov_md": false 00:03:31.256 }, 00:03:31.256 "memory_domains": [ 00:03:31.256 { 00:03:31.256 "dma_device_id": "system", 00:03:31.256 "dma_device_type": 1 00:03:31.256 }, 00:03:31.256 { 00:03:31.256 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:31.256 "dma_device_type": 2 00:03:31.256 } 00:03:31.256 ], 00:03:31.256 "driver_specific": {} 00:03:31.256 }, 00:03:31.256 { 00:03:31.256 "name": "Passthru0", 00:03:31.256 "aliases": [ 00:03:31.256 "4e5ce63b-d71b-5296-a53e-d52fd9cea82e" 00:03:31.256 ], 00:03:31.256 "product_name": "passthru", 00:03:31.256 
"block_size": 512, 00:03:31.256 "num_blocks": 16384, 00:03:31.256 "uuid": "4e5ce63b-d71b-5296-a53e-d52fd9cea82e", 00:03:31.256 "assigned_rate_limits": { 00:03:31.256 "rw_ios_per_sec": 0, 00:03:31.256 "rw_mbytes_per_sec": 0, 00:03:31.256 "r_mbytes_per_sec": 0, 00:03:31.256 "w_mbytes_per_sec": 0 00:03:31.257 }, 00:03:31.257 "claimed": false, 00:03:31.257 "zoned": false, 00:03:31.257 "supported_io_types": { 00:03:31.257 "read": true, 00:03:31.257 "write": true, 00:03:31.257 "unmap": true, 00:03:31.257 "flush": true, 00:03:31.257 "reset": true, 00:03:31.257 "nvme_admin": false, 00:03:31.257 "nvme_io": false, 00:03:31.257 "nvme_io_md": false, 00:03:31.257 "write_zeroes": true, 00:03:31.257 "zcopy": true, 00:03:31.257 "get_zone_info": false, 00:03:31.257 "zone_management": false, 00:03:31.257 "zone_append": false, 00:03:31.257 "compare": false, 00:03:31.257 "compare_and_write": false, 00:03:31.257 "abort": true, 00:03:31.257 "seek_hole": false, 00:03:31.257 "seek_data": false, 00:03:31.257 "copy": true, 00:03:31.257 "nvme_iov_md": false 00:03:31.257 }, 00:03:31.257 "memory_domains": [ 00:03:31.257 { 00:03:31.257 "dma_device_id": "system", 00:03:31.257 "dma_device_type": 1 00:03:31.257 }, 00:03:31.257 { 00:03:31.257 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:31.257 "dma_device_type": 2 00:03:31.257 } 00:03:31.257 ], 00:03:31.257 "driver_specific": { 00:03:31.257 "passthru": { 00:03:31.257 "name": "Passthru0", 00:03:31.257 "base_bdev_name": "Malloc0" 00:03:31.257 } 00:03:31.257 } 00:03:31.257 } 00:03:31.257 ]' 00:03:31.257 19:44:22 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:31.257 19:44:22 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:31.257 19:44:22 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:31.257 19:44:22 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:31.257 19:44:22 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:31.257 19:44:22 rpc.rpc_integrity 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:31.257 19:44:22 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:03:31.257 19:44:22 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:31.257 19:44:22 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:31.257 19:44:22 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:31.257 19:44:22 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:31.257 19:44:22 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:31.257 19:44:22 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:31.257 19:44:22 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:31.257 19:44:22 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:31.257 19:44:22 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:31.257 19:44:22 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:31.257 00:03:31.257 real 0m0.248s 00:03:31.257 user 0m0.125s 00:03:31.257 sys 0m0.031s 00:03:31.257 19:44:22 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:31.257 19:44:22 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:31.257 ************************************ 00:03:31.257 END TEST rpc_integrity 00:03:31.257 ************************************ 00:03:31.608 19:44:22 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:03:31.608 19:44:22 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:31.608 19:44:22 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:31.608 19:44:22 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:31.608 ************************************ 00:03:31.608 START TEST rpc_plugins 00:03:31.608 ************************************ 00:03:31.608 19:44:22 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:03:31.608 19:44:22 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # 
rpc_cmd --plugin rpc_plugin create_malloc 00:03:31.608 19:44:22 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:31.608 19:44:22 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:31.608 19:44:22 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:31.608 19:44:22 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:03:31.608 19:44:22 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:03:31.608 19:44:22 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:31.608 19:44:22 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:31.608 19:44:22 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:31.608 19:44:22 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:03:31.608 { 00:03:31.608 "name": "Malloc1", 00:03:31.608 "aliases": [ 00:03:31.608 "1e3ad051-3eaa-4e63-bd85-5948666b68b6" 00:03:31.608 ], 00:03:31.608 "product_name": "Malloc disk", 00:03:31.608 "block_size": 4096, 00:03:31.608 "num_blocks": 256, 00:03:31.608 "uuid": "1e3ad051-3eaa-4e63-bd85-5948666b68b6", 00:03:31.608 "assigned_rate_limits": { 00:03:31.608 "rw_ios_per_sec": 0, 00:03:31.608 "rw_mbytes_per_sec": 0, 00:03:31.608 "r_mbytes_per_sec": 0, 00:03:31.608 "w_mbytes_per_sec": 0 00:03:31.608 }, 00:03:31.608 "claimed": false, 00:03:31.608 "zoned": false, 00:03:31.608 "supported_io_types": { 00:03:31.608 "read": true, 00:03:31.608 "write": true, 00:03:31.608 "unmap": true, 00:03:31.608 "flush": true, 00:03:31.608 "reset": true, 00:03:31.608 "nvme_admin": false, 00:03:31.608 "nvme_io": false, 00:03:31.608 "nvme_io_md": false, 00:03:31.608 "write_zeroes": true, 00:03:31.608 "zcopy": true, 00:03:31.608 "get_zone_info": false, 00:03:31.608 "zone_management": false, 00:03:31.608 "zone_append": false, 00:03:31.608 "compare": false, 00:03:31.608 "compare_and_write": false, 00:03:31.608 "abort": true, 00:03:31.608 "seek_hole": false, 00:03:31.608 "seek_data": false, 00:03:31.608 "copy": 
true, 00:03:31.608 "nvme_iov_md": false 00:03:31.608 }, 00:03:31.608 "memory_domains": [ 00:03:31.608 { 00:03:31.608 "dma_device_id": "system", 00:03:31.608 "dma_device_type": 1 00:03:31.608 }, 00:03:31.608 { 00:03:31.608 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:31.608 "dma_device_type": 2 00:03:31.608 } 00:03:31.608 ], 00:03:31.608 "driver_specific": {} 00:03:31.608 } 00:03:31.608 ]' 00:03:31.608 19:44:22 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:03:31.608 19:44:22 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:03:31.608 19:44:22 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:03:31.608 19:44:22 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:31.608 19:44:22 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:31.608 19:44:22 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:31.608 19:44:22 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:03:31.608 19:44:22 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:31.608 19:44:22 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:31.608 19:44:22 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:31.608 19:44:22 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:03:31.608 19:44:22 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:03:31.608 19:44:22 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:03:31.608 00:03:31.608 real 0m0.111s 00:03:31.608 user 0m0.059s 00:03:31.608 sys 0m0.015s 00:03:31.608 19:44:22 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:31.608 19:44:22 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:31.608 ************************************ 00:03:31.608 END TEST rpc_plugins 00:03:31.608 ************************************ 00:03:31.608 19:44:22 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:03:31.608 19:44:22 rpc -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:31.608 19:44:22 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:31.608 19:44:22 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:31.608 ************************************ 00:03:31.608 START TEST rpc_trace_cmd_test 00:03:31.608 ************************************ 00:03:31.608 19:44:22 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:03:31.608 19:44:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:03:31.608 19:44:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:03:31.608 19:44:22 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:31.608 19:44:22 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:31.609 19:44:22 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:31.609 19:44:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:03:31.609 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid56143", 00:03:31.609 "tpoint_group_mask": "0x8", 00:03:31.609 "iscsi_conn": { 00:03:31.609 "mask": "0x2", 00:03:31.609 "tpoint_mask": "0x0" 00:03:31.609 }, 00:03:31.609 "scsi": { 00:03:31.609 "mask": "0x4", 00:03:31.609 "tpoint_mask": "0x0" 00:03:31.609 }, 00:03:31.609 "bdev": { 00:03:31.609 "mask": "0x8", 00:03:31.609 "tpoint_mask": "0xffffffffffffffff" 00:03:31.609 }, 00:03:31.609 "nvmf_rdma": { 00:03:31.609 "mask": "0x10", 00:03:31.609 "tpoint_mask": "0x0" 00:03:31.609 }, 00:03:31.609 "nvmf_tcp": { 00:03:31.609 "mask": "0x20", 00:03:31.609 "tpoint_mask": "0x0" 00:03:31.609 }, 00:03:31.609 "ftl": { 00:03:31.609 "mask": "0x40", 00:03:31.609 "tpoint_mask": "0x0" 00:03:31.609 }, 00:03:31.609 "blobfs": { 00:03:31.609 "mask": "0x80", 00:03:31.609 "tpoint_mask": "0x0" 00:03:31.609 }, 00:03:31.609 "dsa": { 00:03:31.609 "mask": "0x200", 00:03:31.609 "tpoint_mask": "0x0" 00:03:31.609 }, 00:03:31.609 "thread": { 00:03:31.609 "mask": "0x400", 00:03:31.609 
"tpoint_mask": "0x0" 00:03:31.609 }, 00:03:31.609 "nvme_pcie": { 00:03:31.609 "mask": "0x800", 00:03:31.609 "tpoint_mask": "0x0" 00:03:31.609 }, 00:03:31.609 "iaa": { 00:03:31.609 "mask": "0x1000", 00:03:31.609 "tpoint_mask": "0x0" 00:03:31.609 }, 00:03:31.609 "nvme_tcp": { 00:03:31.609 "mask": "0x2000", 00:03:31.609 "tpoint_mask": "0x0" 00:03:31.609 }, 00:03:31.609 "bdev_nvme": { 00:03:31.609 "mask": "0x4000", 00:03:31.609 "tpoint_mask": "0x0" 00:03:31.609 }, 00:03:31.609 "sock": { 00:03:31.609 "mask": "0x8000", 00:03:31.609 "tpoint_mask": "0x0" 00:03:31.609 }, 00:03:31.609 "blob": { 00:03:31.609 "mask": "0x10000", 00:03:31.609 "tpoint_mask": "0x0" 00:03:31.609 }, 00:03:31.609 "bdev_raid": { 00:03:31.609 "mask": "0x20000", 00:03:31.609 "tpoint_mask": "0x0" 00:03:31.609 }, 00:03:31.609 "scheduler": { 00:03:31.609 "mask": "0x40000", 00:03:31.609 "tpoint_mask": "0x0" 00:03:31.609 } 00:03:31.609 }' 00:03:31.609 19:44:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:03:31.609 19:44:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:03:31.609 19:44:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:03:31.609 19:44:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:03:31.609 19:44:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:03:31.609 19:44:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:03:31.609 19:44:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:03:31.907 19:44:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:03:31.907 19:44:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:03:31.907 19:44:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:03:31.907 00:03:31.907 real 0m0.174s 00:03:31.907 user 0m0.138s 00:03:31.907 sys 0m0.025s 00:03:31.907 19:44:22 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:03:31.907 19:44:22 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:31.907 ************************************ 00:03:31.907 END TEST rpc_trace_cmd_test 00:03:31.907 ************************************ 00:03:31.907 19:44:22 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:03:31.907 19:44:22 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:03:31.907 19:44:22 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:03:31.907 19:44:22 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:31.907 19:44:22 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:31.907 19:44:22 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:31.907 ************************************ 00:03:31.907 START TEST rpc_daemon_integrity 00:03:31.907 ************************************ 00:03:31.907 19:44:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:03:31.907 19:44:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:31.907 19:44:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:31.907 19:44:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:31.907 19:44:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:31.907 19:44:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:31.907 19:44:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:31.907 19:44:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:31.907 19:44:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:31.907 19:44:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:31.907 19:44:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:31.907 19:44:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:31.907 19:44:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 
-- # malloc=Malloc2 00:03:31.907 19:44:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:31.907 19:44:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:31.907 19:44:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:31.907 19:44:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:31.907 19:44:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:31.907 { 00:03:31.907 "name": "Malloc2", 00:03:31.907 "aliases": [ 00:03:31.907 "29f55732-0684-426a-9a37-9d373778bce5" 00:03:31.907 ], 00:03:31.907 "product_name": "Malloc disk", 00:03:31.907 "block_size": 512, 00:03:31.907 "num_blocks": 16384, 00:03:31.907 "uuid": "29f55732-0684-426a-9a37-9d373778bce5", 00:03:31.907 "assigned_rate_limits": { 00:03:31.907 "rw_ios_per_sec": 0, 00:03:31.907 "rw_mbytes_per_sec": 0, 00:03:31.907 "r_mbytes_per_sec": 0, 00:03:31.907 "w_mbytes_per_sec": 0 00:03:31.907 }, 00:03:31.907 "claimed": false, 00:03:31.907 "zoned": false, 00:03:31.907 "supported_io_types": { 00:03:31.907 "read": true, 00:03:31.907 "write": true, 00:03:31.907 "unmap": true, 00:03:31.907 "flush": true, 00:03:31.907 "reset": true, 00:03:31.907 "nvme_admin": false, 00:03:31.907 "nvme_io": false, 00:03:31.907 "nvme_io_md": false, 00:03:31.907 "write_zeroes": true, 00:03:31.907 "zcopy": true, 00:03:31.907 "get_zone_info": false, 00:03:31.907 "zone_management": false, 00:03:31.907 "zone_append": false, 00:03:31.907 "compare": false, 00:03:31.907 "compare_and_write": false, 00:03:31.907 "abort": true, 00:03:31.907 "seek_hole": false, 00:03:31.907 "seek_data": false, 00:03:31.907 "copy": true, 00:03:31.907 "nvme_iov_md": false 00:03:31.907 }, 00:03:31.907 "memory_domains": [ 00:03:31.907 { 00:03:31.907 "dma_device_id": "system", 00:03:31.907 "dma_device_type": 1 00:03:31.907 }, 00:03:31.907 { 00:03:31.907 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:31.907 "dma_device_type": 2 00:03:31.907 } 
00:03:31.907 ], 00:03:31.907 "driver_specific": {} 00:03:31.907 } 00:03:31.907 ]' 00:03:31.907 19:44:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:31.907 19:44:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:31.907 19:44:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:03:31.907 19:44:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:31.907 19:44:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:31.907 [2024-11-26 19:44:22.676975] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:03:31.907 [2024-11-26 19:44:22.677041] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:31.907 [2024-11-26 19:44:22.677064] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:03:31.907 [2024-11-26 19:44:22.677076] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:31.907 [2024-11-26 19:44:22.679423] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:31.907 [2024-11-26 19:44:22.679462] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:31.907 Passthru0 00:03:31.907 19:44:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:31.907 19:44:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:31.907 19:44:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:31.907 19:44:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:31.907 19:44:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:31.907 19:44:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:31.907 { 00:03:31.907 "name": "Malloc2", 00:03:31.907 "aliases": [ 00:03:31.907 "29f55732-0684-426a-9a37-9d373778bce5" 
00:03:31.907 ], 00:03:31.907 "product_name": "Malloc disk", 00:03:31.907 "block_size": 512, 00:03:31.907 "num_blocks": 16384, 00:03:31.907 "uuid": "29f55732-0684-426a-9a37-9d373778bce5", 00:03:31.907 "assigned_rate_limits": { 00:03:31.907 "rw_ios_per_sec": 0, 00:03:31.907 "rw_mbytes_per_sec": 0, 00:03:31.907 "r_mbytes_per_sec": 0, 00:03:31.907 "w_mbytes_per_sec": 0 00:03:31.907 }, 00:03:31.907 "claimed": true, 00:03:31.907 "claim_type": "exclusive_write", 00:03:31.907 "zoned": false, 00:03:31.907 "supported_io_types": { 00:03:31.907 "read": true, 00:03:31.907 "write": true, 00:03:31.907 "unmap": true, 00:03:31.907 "flush": true, 00:03:31.907 "reset": true, 00:03:31.907 "nvme_admin": false, 00:03:31.907 "nvme_io": false, 00:03:31.907 "nvme_io_md": false, 00:03:31.907 "write_zeroes": true, 00:03:31.907 "zcopy": true, 00:03:31.907 "get_zone_info": false, 00:03:31.907 "zone_management": false, 00:03:31.907 "zone_append": false, 00:03:31.907 "compare": false, 00:03:31.907 "compare_and_write": false, 00:03:31.907 "abort": true, 00:03:31.908 "seek_hole": false, 00:03:31.908 "seek_data": false, 00:03:31.908 "copy": true, 00:03:31.908 "nvme_iov_md": false 00:03:31.908 }, 00:03:31.908 "memory_domains": [ 00:03:31.908 { 00:03:31.908 "dma_device_id": "system", 00:03:31.908 "dma_device_type": 1 00:03:31.908 }, 00:03:31.908 { 00:03:31.908 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:31.908 "dma_device_type": 2 00:03:31.908 } 00:03:31.908 ], 00:03:31.908 "driver_specific": {} 00:03:31.908 }, 00:03:31.908 { 00:03:31.908 "name": "Passthru0", 00:03:31.908 "aliases": [ 00:03:31.908 "911fb165-f8b5-59d2-8c20-951fe51d652c" 00:03:31.908 ], 00:03:31.908 "product_name": "passthru", 00:03:31.908 "block_size": 512, 00:03:31.908 "num_blocks": 16384, 00:03:31.908 "uuid": "911fb165-f8b5-59d2-8c20-951fe51d652c", 00:03:31.908 "assigned_rate_limits": { 00:03:31.908 "rw_ios_per_sec": 0, 00:03:31.908 "rw_mbytes_per_sec": 0, 00:03:31.908 "r_mbytes_per_sec": 0, 00:03:31.908 "w_mbytes_per_sec": 0 
00:03:31.908 }, 00:03:31.908 "claimed": false, 00:03:31.908 "zoned": false, 00:03:31.908 "supported_io_types": { 00:03:31.908 "read": true, 00:03:31.908 "write": true, 00:03:31.908 "unmap": true, 00:03:31.908 "flush": true, 00:03:31.908 "reset": true, 00:03:31.908 "nvme_admin": false, 00:03:31.908 "nvme_io": false, 00:03:31.908 "nvme_io_md": false, 00:03:31.908 "write_zeroes": true, 00:03:31.908 "zcopy": true, 00:03:31.908 "get_zone_info": false, 00:03:31.908 "zone_management": false, 00:03:31.908 "zone_append": false, 00:03:31.908 "compare": false, 00:03:31.908 "compare_and_write": false, 00:03:31.908 "abort": true, 00:03:31.908 "seek_hole": false, 00:03:31.908 "seek_data": false, 00:03:31.908 "copy": true, 00:03:31.908 "nvme_iov_md": false 00:03:31.908 }, 00:03:31.908 "memory_domains": [ 00:03:31.908 { 00:03:31.908 "dma_device_id": "system", 00:03:31.908 "dma_device_type": 1 00:03:31.908 }, 00:03:31.908 { 00:03:31.908 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:31.908 "dma_device_type": 2 00:03:31.908 } 00:03:31.908 ], 00:03:31.908 "driver_specific": { 00:03:31.908 "passthru": { 00:03:31.908 "name": "Passthru0", 00:03:31.908 "base_bdev_name": "Malloc2" 00:03:31.908 } 00:03:31.908 } 00:03:31.908 } 00:03:31.908 ]' 00:03:31.908 19:44:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:31.908 19:44:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:31.908 19:44:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:31.908 19:44:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:31.908 19:44:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:31.908 19:44:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:31.908 19:44:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:03:31.908 19:44:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:03:31.908 19:44:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:31.908 19:44:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:31.908 19:44:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:31.908 19:44:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:31.908 19:44:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:31.908 19:44:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:31.908 19:44:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:31.908 19:44:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:31.908 19:44:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:31.908 00:03:31.908 real 0m0.248s 00:03:31.908 user 0m0.129s 00:03:31.908 sys 0m0.036s 00:03:31.908 19:44:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:31.908 19:44:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:31.908 ************************************ 00:03:31.908 END TEST rpc_daemon_integrity 00:03:31.908 ************************************ 00:03:32.167 19:44:22 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:03:32.167 19:44:22 rpc -- rpc/rpc.sh@84 -- # killprocess 56143 00:03:32.167 19:44:22 rpc -- common/autotest_common.sh@954 -- # '[' -z 56143 ']' 00:03:32.167 19:44:22 rpc -- common/autotest_common.sh@958 -- # kill -0 56143 00:03:32.167 19:44:22 rpc -- common/autotest_common.sh@959 -- # uname 00:03:32.167 19:44:22 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:32.167 19:44:22 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 56143 00:03:32.167 19:44:22 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:32.167 19:44:22 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:32.167 
killing process with pid 56143 00:03:32.167 19:44:22 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 56143' 00:03:32.167 19:44:22 rpc -- common/autotest_common.sh@973 -- # kill 56143 00:03:32.167 19:44:22 rpc -- common/autotest_common.sh@978 -- # wait 56143 00:03:34.064 00:03:34.064 real 0m3.718s 00:03:34.064 user 0m4.066s 00:03:34.064 sys 0m0.678s 00:03:34.064 19:44:24 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:34.064 ************************************ 00:03:34.064 END TEST rpc 00:03:34.064 ************************************ 00:03:34.064 19:44:24 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:34.064 19:44:24 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:03:34.064 19:44:24 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:34.064 19:44:24 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:34.064 19:44:24 -- common/autotest_common.sh@10 -- # set +x 00:03:34.064 ************************************ 00:03:34.064 START TEST skip_rpc 00:03:34.064 ************************************ 00:03:34.064 19:44:24 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:03:34.064 * Looking for test storage... 
00:03:34.064 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:03:34.064 19:44:24 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:34.064 19:44:24 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:03:34.064 19:44:24 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:34.064 19:44:24 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:34.064 19:44:24 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:34.064 19:44:24 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:34.064 19:44:24 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:34.064 19:44:24 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:03:34.064 19:44:24 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:03:34.064 19:44:24 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:03:34.064 19:44:24 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:03:34.064 19:44:24 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:03:34.064 19:44:24 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:03:34.064 19:44:24 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:03:34.064 19:44:24 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:34.064 19:44:24 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:03:34.064 19:44:24 skip_rpc -- scripts/common.sh@345 -- # : 1 00:03:34.064 19:44:24 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:34.064 19:44:24 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:34.064 19:44:24 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:03:34.064 19:44:24 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:03:34.064 19:44:24 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:34.064 19:44:24 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:03:34.064 19:44:24 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:03:34.064 19:44:24 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:03:34.064 19:44:24 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:03:34.064 19:44:24 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:34.064 19:44:24 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:03:34.064 19:44:24 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:03:34.064 19:44:24 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:34.064 19:44:24 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:34.064 19:44:24 skip_rpc -- scripts/common.sh@368 -- # return 0 00:03:34.064 19:44:24 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:34.064 19:44:24 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:34.064 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:34.064 --rc genhtml_branch_coverage=1 00:03:34.064 --rc genhtml_function_coverage=1 00:03:34.064 --rc genhtml_legend=1 00:03:34.064 --rc geninfo_all_blocks=1 00:03:34.064 --rc geninfo_unexecuted_blocks=1 00:03:34.064 00:03:34.064 ' 00:03:34.064 19:44:24 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:34.064 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:34.064 --rc genhtml_branch_coverage=1 00:03:34.064 --rc genhtml_function_coverage=1 00:03:34.064 --rc genhtml_legend=1 00:03:34.064 --rc geninfo_all_blocks=1 00:03:34.064 --rc geninfo_unexecuted_blocks=1 00:03:34.064 00:03:34.064 ' 00:03:34.064 19:44:24 skip_rpc -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:03:34.064 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:34.064 --rc genhtml_branch_coverage=1 00:03:34.064 --rc genhtml_function_coverage=1 00:03:34.064 --rc genhtml_legend=1 00:03:34.064 --rc geninfo_all_blocks=1 00:03:34.064 --rc geninfo_unexecuted_blocks=1 00:03:34.064 00:03:34.064 ' 00:03:34.064 19:44:24 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:34.064 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:34.064 --rc genhtml_branch_coverage=1 00:03:34.065 --rc genhtml_function_coverage=1 00:03:34.065 --rc genhtml_legend=1 00:03:34.065 --rc geninfo_all_blocks=1 00:03:34.065 --rc geninfo_unexecuted_blocks=1 00:03:34.065 00:03:34.065 ' 00:03:34.065 19:44:24 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:03:34.065 19:44:24 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:03:34.065 19:44:24 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:03:34.065 19:44:24 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:34.065 19:44:24 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:34.065 19:44:24 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:34.065 ************************************ 00:03:34.065 START TEST skip_rpc 00:03:34.065 ************************************ 00:03:34.065 19:44:24 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:03:34.065 19:44:24 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=56361 00:03:34.065 19:44:24 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:34.065 19:44:24 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:03:34.065 19:44:24 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:03:34.065 [2024-11-26 19:44:24.740322] Starting SPDK v25.01-pre 
git sha1 e43b3b914 / DPDK 24.03.0 initialization... 00:03:34.065 [2024-11-26 19:44:24.740476] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56361 ] 00:03:34.065 [2024-11-26 19:44:24.902364] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:34.321 [2024-11-26 19:44:25.019694] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:39.612 19:44:29 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:03:39.612 19:44:29 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:03:39.612 19:44:29 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:03:39.612 19:44:29 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:03:39.612 19:44:29 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:39.612 19:44:29 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:03:39.612 19:44:29 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:39.612 19:44:29 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:03:39.612 19:44:29 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:39.612 19:44:29 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:39.612 19:44:29 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:03:39.612 19:44:29 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:03:39.612 19:44:29 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:03:39.612 19:44:29 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:03:39.612 19:44:29 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( 
!es == 0 )) 00:03:39.612 19:44:29 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:03:39.612 19:44:29 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 56361 00:03:39.612 19:44:29 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 56361 ']' 00:03:39.612 19:44:29 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 56361 00:03:39.612 19:44:29 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:03:39.612 19:44:29 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:39.612 19:44:29 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 56361 00:03:39.612 19:44:29 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:39.612 killing process with pid 56361 00:03:39.612 19:44:29 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:39.612 19:44:29 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 56361' 00:03:39.612 19:44:29 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 56361 00:03:39.612 19:44:29 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 56361 00:03:40.177 00:03:40.177 real 0m6.330s 00:03:40.177 user 0m5.897s 00:03:40.177 sys 0m0.326s 00:03:40.177 19:44:30 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:40.177 19:44:30 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:40.177 ************************************ 00:03:40.177 END TEST skip_rpc 00:03:40.177 ************************************ 00:03:40.177 19:44:31 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:03:40.177 19:44:31 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:40.177 19:44:31 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:40.177 19:44:31 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:40.177 
************************************ 00:03:40.177 START TEST skip_rpc_with_json 00:03:40.177 ************************************ 00:03:40.177 19:44:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:03:40.177 19:44:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:03:40.177 19:44:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=56454 00:03:40.177 19:44:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:40.177 19:44:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 56454 00:03:40.177 19:44:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 56454 ']' 00:03:40.177 19:44:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:40.177 19:44:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:40.177 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:40.177 19:44:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:40.177 19:44:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:03:40.177 19:44:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:40.177 19:44:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:40.177 [2024-11-26 19:44:31.111933] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 
00:03:40.177 [2024-11-26 19:44:31.112388] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56454 ] 00:03:40.442 [2024-11-26 19:44:31.270646] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:40.715 [2024-11-26 19:44:31.388073] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:41.282 19:44:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:41.282 19:44:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:03:41.282 19:44:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:03:41.282 19:44:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:41.282 19:44:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:41.282 [2024-11-26 19:44:32.048692] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:03:41.282 request: 00:03:41.282 { 00:03:41.282 "trtype": "tcp", 00:03:41.282 "method": "nvmf_get_transports", 00:03:41.282 "req_id": 1 00:03:41.282 } 00:03:41.282 Got JSON-RPC error response 00:03:41.282 response: 00:03:41.282 { 00:03:41.282 "code": -19, 00:03:41.282 "message": "No such device" 00:03:41.282 } 00:03:41.282 19:44:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:03:41.282 19:44:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:03:41.282 19:44:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:41.282 19:44:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:41.282 [2024-11-26 19:44:32.060880] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
00:03:41.282 19:44:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:41.282 19:44:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:03:41.282 19:44:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:41.282 19:44:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:41.282 19:44:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:41.282 19:44:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:03:41.540 { 00:03:41.540 "subsystems": [ 00:03:41.540 { 00:03:41.540 "subsystem": "fsdev", 00:03:41.540 "config": [ 00:03:41.540 { 00:03:41.540 "method": "fsdev_set_opts", 00:03:41.540 "params": { 00:03:41.540 "fsdev_io_pool_size": 65535, 00:03:41.540 "fsdev_io_cache_size": 256 00:03:41.540 } 00:03:41.540 } 00:03:41.540 ] 00:03:41.540 }, 00:03:41.540 { 00:03:41.540 "subsystem": "keyring", 00:03:41.540 "config": [] 00:03:41.540 }, 00:03:41.540 { 00:03:41.540 "subsystem": "iobuf", 00:03:41.540 "config": [ 00:03:41.540 { 00:03:41.540 "method": "iobuf_set_options", 00:03:41.540 "params": { 00:03:41.540 "small_pool_count": 8192, 00:03:41.540 "large_pool_count": 1024, 00:03:41.540 "small_bufsize": 8192, 00:03:41.540 "large_bufsize": 135168, 00:03:41.540 "enable_numa": false 00:03:41.540 } 00:03:41.540 } 00:03:41.540 ] 00:03:41.540 }, 00:03:41.540 { 00:03:41.540 "subsystem": "sock", 00:03:41.540 "config": [ 00:03:41.540 { 00:03:41.540 "method": "sock_set_default_impl", 00:03:41.540 "params": { 00:03:41.540 "impl_name": "posix" 00:03:41.540 } 00:03:41.540 }, 00:03:41.540 { 00:03:41.540 "method": "sock_impl_set_options", 00:03:41.540 "params": { 00:03:41.540 "impl_name": "ssl", 00:03:41.540 "recv_buf_size": 4096, 00:03:41.540 "send_buf_size": 4096, 00:03:41.540 "enable_recv_pipe": true, 00:03:41.540 "enable_quickack": false, 00:03:41.540 
"enable_placement_id": 0, 00:03:41.540 "enable_zerocopy_send_server": true, 00:03:41.540 "enable_zerocopy_send_client": false, 00:03:41.540 "zerocopy_threshold": 0, 00:03:41.540 "tls_version": 0, 00:03:41.540 "enable_ktls": false 00:03:41.540 } 00:03:41.540 }, 00:03:41.540 { 00:03:41.540 "method": "sock_impl_set_options", 00:03:41.540 "params": { 00:03:41.540 "impl_name": "posix", 00:03:41.540 "recv_buf_size": 2097152, 00:03:41.540 "send_buf_size": 2097152, 00:03:41.540 "enable_recv_pipe": true, 00:03:41.540 "enable_quickack": false, 00:03:41.540 "enable_placement_id": 0, 00:03:41.540 "enable_zerocopy_send_server": true, 00:03:41.540 "enable_zerocopy_send_client": false, 00:03:41.540 "zerocopy_threshold": 0, 00:03:41.540 "tls_version": 0, 00:03:41.540 "enable_ktls": false 00:03:41.541 } 00:03:41.541 } 00:03:41.541 ] 00:03:41.541 }, 00:03:41.541 { 00:03:41.541 "subsystem": "vmd", 00:03:41.541 "config": [] 00:03:41.541 }, 00:03:41.541 { 00:03:41.541 "subsystem": "accel", 00:03:41.541 "config": [ 00:03:41.541 { 00:03:41.541 "method": "accel_set_options", 00:03:41.541 "params": { 00:03:41.541 "small_cache_size": 128, 00:03:41.541 "large_cache_size": 16, 00:03:41.541 "task_count": 2048, 00:03:41.541 "sequence_count": 2048, 00:03:41.541 "buf_count": 2048 00:03:41.541 } 00:03:41.541 } 00:03:41.541 ] 00:03:41.541 }, 00:03:41.541 { 00:03:41.541 "subsystem": "bdev", 00:03:41.541 "config": [ 00:03:41.541 { 00:03:41.541 "method": "bdev_set_options", 00:03:41.541 "params": { 00:03:41.541 "bdev_io_pool_size": 65535, 00:03:41.541 "bdev_io_cache_size": 256, 00:03:41.541 "bdev_auto_examine": true, 00:03:41.541 "iobuf_small_cache_size": 128, 00:03:41.541 "iobuf_large_cache_size": 16 00:03:41.541 } 00:03:41.541 }, 00:03:41.541 { 00:03:41.541 "method": "bdev_raid_set_options", 00:03:41.541 "params": { 00:03:41.541 "process_window_size_kb": 1024, 00:03:41.541 "process_max_bandwidth_mb_sec": 0 00:03:41.541 } 00:03:41.541 }, 00:03:41.541 { 00:03:41.541 "method": "bdev_iscsi_set_options", 
00:03:41.541 "params": { 00:03:41.541 "timeout_sec": 30 00:03:41.541 } 00:03:41.541 }, 00:03:41.541 { 00:03:41.541 "method": "bdev_nvme_set_options", 00:03:41.541 "params": { 00:03:41.541 "action_on_timeout": "none", 00:03:41.541 "timeout_us": 0, 00:03:41.541 "timeout_admin_us": 0, 00:03:41.541 "keep_alive_timeout_ms": 10000, 00:03:41.541 "arbitration_burst": 0, 00:03:41.541 "low_priority_weight": 0, 00:03:41.541 "medium_priority_weight": 0, 00:03:41.541 "high_priority_weight": 0, 00:03:41.541 "nvme_adminq_poll_period_us": 10000, 00:03:41.541 "nvme_ioq_poll_period_us": 0, 00:03:41.541 "io_queue_requests": 0, 00:03:41.541 "delay_cmd_submit": true, 00:03:41.541 "transport_retry_count": 4, 00:03:41.541 "bdev_retry_count": 3, 00:03:41.541 "transport_ack_timeout": 0, 00:03:41.541 "ctrlr_loss_timeout_sec": 0, 00:03:41.541 "reconnect_delay_sec": 0, 00:03:41.541 "fast_io_fail_timeout_sec": 0, 00:03:41.541 "disable_auto_failback": false, 00:03:41.541 "generate_uuids": false, 00:03:41.541 "transport_tos": 0, 00:03:41.541 "nvme_error_stat": false, 00:03:41.541 "rdma_srq_size": 0, 00:03:41.541 "io_path_stat": false, 00:03:41.541 "allow_accel_sequence": false, 00:03:41.541 "rdma_max_cq_size": 0, 00:03:41.541 "rdma_cm_event_timeout_ms": 0, 00:03:41.541 "dhchap_digests": [ 00:03:41.541 "sha256", 00:03:41.541 "sha384", 00:03:41.541 "sha512" 00:03:41.541 ], 00:03:41.541 "dhchap_dhgroups": [ 00:03:41.541 "null", 00:03:41.541 "ffdhe2048", 00:03:41.541 "ffdhe3072", 00:03:41.541 "ffdhe4096", 00:03:41.541 "ffdhe6144", 00:03:41.541 "ffdhe8192" 00:03:41.541 ] 00:03:41.541 } 00:03:41.541 }, 00:03:41.541 { 00:03:41.541 "method": "bdev_nvme_set_hotplug", 00:03:41.541 "params": { 00:03:41.541 "period_us": 100000, 00:03:41.541 "enable": false 00:03:41.541 } 00:03:41.541 }, 00:03:41.541 { 00:03:41.541 "method": "bdev_wait_for_examine" 00:03:41.541 } 00:03:41.541 ] 00:03:41.541 }, 00:03:41.541 { 00:03:41.541 "subsystem": "scsi", 00:03:41.541 "config": null 00:03:41.541 }, 00:03:41.541 { 
00:03:41.541 "subsystem": "scheduler", 00:03:41.541 "config": [ 00:03:41.541 { 00:03:41.541 "method": "framework_set_scheduler", 00:03:41.541 "params": { 00:03:41.541 "name": "static" 00:03:41.541 } 00:03:41.541 } 00:03:41.541 ] 00:03:41.541 }, 00:03:41.541 { 00:03:41.541 "subsystem": "vhost_scsi", 00:03:41.541 "config": [] 00:03:41.541 }, 00:03:41.541 { 00:03:41.541 "subsystem": "vhost_blk", 00:03:41.541 "config": [] 00:03:41.541 }, 00:03:41.541 { 00:03:41.541 "subsystem": "ublk", 00:03:41.541 "config": [] 00:03:41.541 }, 00:03:41.541 { 00:03:41.541 "subsystem": "nbd", 00:03:41.541 "config": [] 00:03:41.541 }, 00:03:41.541 { 00:03:41.541 "subsystem": "nvmf", 00:03:41.541 "config": [ 00:03:41.541 { 00:03:41.541 "method": "nvmf_set_config", 00:03:41.541 "params": { 00:03:41.541 "discovery_filter": "match_any", 00:03:41.541 "admin_cmd_passthru": { 00:03:41.541 "identify_ctrlr": false 00:03:41.541 }, 00:03:41.541 "dhchap_digests": [ 00:03:41.541 "sha256", 00:03:41.541 "sha384", 00:03:41.541 "sha512" 00:03:41.541 ], 00:03:41.541 "dhchap_dhgroups": [ 00:03:41.541 "null", 00:03:41.541 "ffdhe2048", 00:03:41.541 "ffdhe3072", 00:03:41.541 "ffdhe4096", 00:03:41.541 "ffdhe6144", 00:03:41.541 "ffdhe8192" 00:03:41.541 ] 00:03:41.541 } 00:03:41.541 }, 00:03:41.541 { 00:03:41.541 "method": "nvmf_set_max_subsystems", 00:03:41.541 "params": { 00:03:41.541 "max_subsystems": 1024 00:03:41.541 } 00:03:41.541 }, 00:03:41.541 { 00:03:41.541 "method": "nvmf_set_crdt", 00:03:41.541 "params": { 00:03:41.541 "crdt1": 0, 00:03:41.541 "crdt2": 0, 00:03:41.541 "crdt3": 0 00:03:41.541 } 00:03:41.541 }, 00:03:41.541 { 00:03:41.541 "method": "nvmf_create_transport", 00:03:41.541 "params": { 00:03:41.541 "trtype": "TCP", 00:03:41.541 "max_queue_depth": 128, 00:03:41.541 "max_io_qpairs_per_ctrlr": 127, 00:03:41.541 "in_capsule_data_size": 4096, 00:03:41.541 "max_io_size": 131072, 00:03:41.541 "io_unit_size": 131072, 00:03:41.541 "max_aq_depth": 128, 00:03:41.541 "num_shared_buffers": 511, 
00:03:41.541 "buf_cache_size": 4294967295, 00:03:41.541 "dif_insert_or_strip": false, 00:03:41.541 "zcopy": false, 00:03:41.541 "c2h_success": true, 00:03:41.541 "sock_priority": 0, 00:03:41.541 "abort_timeout_sec": 1, 00:03:41.541 "ack_timeout": 0, 00:03:41.541 "data_wr_pool_size": 0 00:03:41.541 } 00:03:41.541 } 00:03:41.541 ] 00:03:41.541 }, 00:03:41.541 { 00:03:41.541 "subsystem": "iscsi", 00:03:41.541 "config": [ 00:03:41.541 { 00:03:41.541 "method": "iscsi_set_options", 00:03:41.541 "params": { 00:03:41.541 "node_base": "iqn.2016-06.io.spdk", 00:03:41.541 "max_sessions": 128, 00:03:41.541 "max_connections_per_session": 2, 00:03:41.541 "max_queue_depth": 64, 00:03:41.541 "default_time2wait": 2, 00:03:41.541 "default_time2retain": 20, 00:03:41.541 "first_burst_length": 8192, 00:03:41.541 "immediate_data": true, 00:03:41.541 "allow_duplicated_isid": false, 00:03:41.541 "error_recovery_level": 0, 00:03:41.541 "nop_timeout": 60, 00:03:41.541 "nop_in_interval": 30, 00:03:41.541 "disable_chap": false, 00:03:41.541 "require_chap": false, 00:03:41.541 "mutual_chap": false, 00:03:41.541 "chap_group": 0, 00:03:41.541 "max_large_datain_per_connection": 64, 00:03:41.541 "max_r2t_per_connection": 4, 00:03:41.541 "pdu_pool_size": 36864, 00:03:41.541 "immediate_data_pool_size": 16384, 00:03:41.541 "data_out_pool_size": 2048 00:03:41.541 } 00:03:41.541 } 00:03:41.541 ] 00:03:41.541 } 00:03:41.541 ] 00:03:41.541 } 00:03:41.541 19:44:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:03:41.541 19:44:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 56454 00:03:41.541 19:44:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 56454 ']' 00:03:41.541 19:44:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 56454 00:03:41.541 19:44:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:03:41.541 19:44:32 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:41.541 19:44:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 56454 00:03:41.541 19:44:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:41.541 19:44:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:41.541 killing process with pid 56454 00:03:41.541 19:44:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 56454' 00:03:41.541 19:44:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 56454 00:03:41.541 19:44:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 56454 00:03:42.934 19:44:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=56499 00:03:42.934 19:44:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:03:42.934 19:44:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:03:48.192 19:44:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 56499 00:03:48.192 19:44:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 56499 ']' 00:03:48.192 19:44:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 56499 00:03:48.192 19:44:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:03:48.192 19:44:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:48.192 19:44:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 56499 00:03:48.192 19:44:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:48.192 19:44:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = 
sudo ']' 00:03:48.192 killing process with pid 56499 00:03:48.192 19:44:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 56499' 00:03:48.192 19:44:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 56499 00:03:48.192 19:44:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 56499 00:03:49.574 19:44:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:03:49.574 19:44:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:03:49.574 00:03:49.574 real 0m9.155s 00:03:49.574 user 0m8.619s 00:03:49.574 sys 0m0.717s 00:03:49.574 19:44:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:49.574 19:44:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:49.574 ************************************ 00:03:49.574 END TEST skip_rpc_with_json 00:03:49.574 ************************************ 00:03:49.574 19:44:40 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:03:49.574 19:44:40 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:49.574 19:44:40 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:49.574 19:44:40 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:49.574 ************************************ 00:03:49.574 START TEST skip_rpc_with_delay 00:03:49.574 ************************************ 00:03:49.574 19:44:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:03:49.574 19:44:40 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:49.574 19:44:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:03:49.574 19:44:40 
skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:49.574 19:44:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:03:49.574 19:44:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:49.574 19:44:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:03:49.574 19:44:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:49.574 19:44:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:03:49.574 19:44:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:49.574 19:44:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:03:49.574 19:44:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:03:49.574 19:44:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:49.574 [2024-11-26 19:44:40.306667] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:03:49.574 19:44:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:03:49.574 19:44:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:03:49.574 ************************************ 00:03:49.574 END TEST skip_rpc_with_delay 00:03:49.574 ************************************ 00:03:49.574 19:44:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:03:49.574 19:44:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:03:49.574 00:03:49.574 real 0m0.126s 00:03:49.574 user 0m0.070s 00:03:49.574 sys 0m0.055s 00:03:49.574 19:44:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:49.574 19:44:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:03:49.574 19:44:40 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:03:49.574 19:44:40 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:03:49.574 19:44:40 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:03:49.574 19:44:40 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:49.574 19:44:40 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:49.574 19:44:40 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:49.574 ************************************ 00:03:49.574 START TEST exit_on_failed_rpc_init 00:03:49.574 ************************************ 00:03:49.574 19:44:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:03:49.574 19:44:40 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=56622 00:03:49.574 19:44:40 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 56622 00:03:49.574 19:44:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 56622 ']' 00:03:49.574 19:44:40 skip_rpc.exit_on_failed_rpc_init -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:49.574 19:44:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:49.574 19:44:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:49.574 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:49.574 19:44:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:49.574 19:44:40 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:03:49.574 19:44:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:03:49.574 [2024-11-26 19:44:40.467127] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 00:03:49.574 [2024-11-26 19:44:40.467255] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56622 ] 00:03:49.832 [2024-11-26 19:44:40.622211] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:49.832 [2024-11-26 19:44:40.722256] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:50.399 19:44:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:50.399 19:44:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:03:50.399 19:44:41 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:50.399 19:44:41 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:03:50.399 19:44:41 skip_rpc.exit_on_failed_rpc_init 
-- common/autotest_common.sh@652 -- # local es=0 00:03:50.399 19:44:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:03:50.399 19:44:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:03:50.399 19:44:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:50.399 19:44:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:03:50.399 19:44:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:50.399 19:44:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:03:50.399 19:44:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:50.399 19:44:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:03:50.399 19:44:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:03:50.399 19:44:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:03:50.658 [2024-11-26 19:44:41.350486] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 
00:03:50.658 [2024-11-26 19:44:41.350620] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56634 ] 00:03:50.658 [2024-11-26 19:44:41.512928] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:50.928 [2024-11-26 19:44:41.614274] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:03:50.928 [2024-11-26 19:44:41.614378] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:03:50.928 [2024-11-26 19:44:41.614392] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:03:50.928 [2024-11-26 19:44:41.614404] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:03:50.928 19:44:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:03:50.928 19:44:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:03:50.928 19:44:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:03:50.928 19:44:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:03:50.928 19:44:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:03:50.928 19:44:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:03:50.928 19:44:41 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:03:50.928 19:44:41 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 56622 00:03:50.928 19:44:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 56622 ']' 00:03:50.928 19:44:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 56622 00:03:50.928 19:44:41 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:03:50.928 19:44:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:50.928 19:44:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 56622 00:03:50.928 killing process with pid 56622 00:03:50.928 19:44:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:50.928 19:44:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:50.928 19:44:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 56622' 00:03:50.928 19:44:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 56622 00:03:50.928 19:44:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 56622 00:03:52.301 00:03:52.301 real 0m2.741s 00:03:52.301 user 0m2.978s 00:03:52.301 sys 0m0.462s 00:03:52.301 19:44:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:52.301 19:44:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:03:52.301 ************************************ 00:03:52.301 END TEST exit_on_failed_rpc_init 00:03:52.301 ************************************ 00:03:52.301 19:44:43 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:03:52.301 00:03:52.301 real 0m18.655s 00:03:52.301 user 0m17.683s 00:03:52.301 sys 0m1.750s 00:03:52.301 19:44:43 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:52.301 19:44:43 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:52.301 ************************************ 00:03:52.301 END TEST skip_rpc 00:03:52.301 ************************************ 00:03:52.301 19:44:43 -- spdk/autotest.sh@158 -- # run_test rpc_client 
/home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:03:52.301 19:44:43 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:52.301 19:44:43 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:52.301 19:44:43 -- common/autotest_common.sh@10 -- # set +x 00:03:52.301 ************************************ 00:03:52.301 START TEST rpc_client 00:03:52.301 ************************************ 00:03:52.301 19:44:43 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:03:52.561 * Looking for test storage... 00:03:52.561 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:03:52.561 19:44:43 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:52.561 19:44:43 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:03:52.561 19:44:43 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:52.561 19:44:43 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:52.561 19:44:43 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:52.561 19:44:43 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:52.561 19:44:43 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:52.561 19:44:43 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:03:52.561 19:44:43 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:03:52.561 19:44:43 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:03:52.561 19:44:43 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:03:52.561 19:44:43 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:03:52.561 19:44:43 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:03:52.561 19:44:43 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:03:52.561 19:44:43 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:52.561 19:44:43 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:03:52.561 19:44:43 rpc_client -- scripts/common.sh@345 
-- # : 1 00:03:52.561 19:44:43 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:52.561 19:44:43 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:52.561 19:44:43 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:03:52.561 19:44:43 rpc_client -- scripts/common.sh@353 -- # local d=1 00:03:52.561 19:44:43 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:52.561 19:44:43 rpc_client -- scripts/common.sh@355 -- # echo 1 00:03:52.561 19:44:43 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:03:52.561 19:44:43 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:03:52.561 19:44:43 rpc_client -- scripts/common.sh@353 -- # local d=2 00:03:52.561 19:44:43 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:52.561 19:44:43 rpc_client -- scripts/common.sh@355 -- # echo 2 00:03:52.561 19:44:43 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:03:52.561 19:44:43 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:52.561 19:44:43 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:52.561 19:44:43 rpc_client -- scripts/common.sh@368 -- # return 0 00:03:52.561 19:44:43 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:52.561 19:44:43 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:52.561 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:52.561 --rc genhtml_branch_coverage=1 00:03:52.561 --rc genhtml_function_coverage=1 00:03:52.561 --rc genhtml_legend=1 00:03:52.561 --rc geninfo_all_blocks=1 00:03:52.561 --rc geninfo_unexecuted_blocks=1 00:03:52.561 00:03:52.561 ' 00:03:52.561 19:44:43 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:52.561 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:52.561 --rc genhtml_branch_coverage=1 00:03:52.561 --rc genhtml_function_coverage=1 00:03:52.561 --rc 
genhtml_legend=1 00:03:52.561 --rc geninfo_all_blocks=1 00:03:52.561 --rc geninfo_unexecuted_blocks=1 00:03:52.561 00:03:52.561 ' 00:03:52.561 19:44:43 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:52.561 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:52.561 --rc genhtml_branch_coverage=1 00:03:52.561 --rc genhtml_function_coverage=1 00:03:52.561 --rc genhtml_legend=1 00:03:52.561 --rc geninfo_all_blocks=1 00:03:52.561 --rc geninfo_unexecuted_blocks=1 00:03:52.561 00:03:52.561 ' 00:03:52.561 19:44:43 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:52.561 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:52.561 --rc genhtml_branch_coverage=1 00:03:52.561 --rc genhtml_function_coverage=1 00:03:52.561 --rc genhtml_legend=1 00:03:52.561 --rc geninfo_all_blocks=1 00:03:52.561 --rc geninfo_unexecuted_blocks=1 00:03:52.561 00:03:52.561 ' 00:03:52.561 19:44:43 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:03:52.561 OK 00:03:52.561 19:44:43 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:03:52.561 00:03:52.561 real 0m0.193s 00:03:52.561 user 0m0.107s 00:03:52.561 sys 0m0.091s 00:03:52.561 19:44:43 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:52.561 19:44:43 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:03:52.561 ************************************ 00:03:52.561 END TEST rpc_client 00:03:52.561 ************************************ 00:03:52.561 19:44:43 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:03:52.561 19:44:43 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:52.561 19:44:43 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:52.561 19:44:43 -- common/autotest_common.sh@10 -- # set +x 00:03:52.561 ************************************ 00:03:52.561 START TEST json_config 
00:03:52.561 ************************************ 00:03:52.561 19:44:43 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:03:52.561 19:44:43 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:52.561 19:44:43 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:03:52.561 19:44:43 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:52.820 19:44:43 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:52.820 19:44:43 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:52.820 19:44:43 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:52.820 19:44:43 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:52.820 19:44:43 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:03:52.820 19:44:43 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:03:52.820 19:44:43 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:03:52.820 19:44:43 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:03:52.820 19:44:43 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:03:52.820 19:44:43 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:03:52.820 19:44:43 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:03:52.820 19:44:43 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:52.820 19:44:43 json_config -- scripts/common.sh@344 -- # case "$op" in 00:03:52.820 19:44:43 json_config -- scripts/common.sh@345 -- # : 1 00:03:52.820 19:44:43 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:52.820 19:44:43 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:52.820 19:44:43 json_config -- scripts/common.sh@365 -- # decimal 1 00:03:52.820 19:44:43 json_config -- scripts/common.sh@353 -- # local d=1 00:03:52.820 19:44:43 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:52.820 19:44:43 json_config -- scripts/common.sh@355 -- # echo 1 00:03:52.820 19:44:43 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:03:52.820 19:44:43 json_config -- scripts/common.sh@366 -- # decimal 2 00:03:52.820 19:44:43 json_config -- scripts/common.sh@353 -- # local d=2 00:03:52.820 19:44:43 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:52.820 19:44:43 json_config -- scripts/common.sh@355 -- # echo 2 00:03:52.820 19:44:43 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:03:52.820 19:44:43 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:52.820 19:44:43 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:52.820 19:44:43 json_config -- scripts/common.sh@368 -- # return 0 00:03:52.820 19:44:43 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:52.820 19:44:43 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:52.820 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:52.820 --rc genhtml_branch_coverage=1 00:03:52.820 --rc genhtml_function_coverage=1 00:03:52.820 --rc genhtml_legend=1 00:03:52.820 --rc geninfo_all_blocks=1 00:03:52.820 --rc geninfo_unexecuted_blocks=1 00:03:52.820 00:03:52.820 ' 00:03:52.820 19:44:43 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:52.820 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:52.820 --rc genhtml_branch_coverage=1 00:03:52.820 --rc genhtml_function_coverage=1 00:03:52.820 --rc genhtml_legend=1 00:03:52.820 --rc geninfo_all_blocks=1 00:03:52.820 --rc geninfo_unexecuted_blocks=1 00:03:52.820 00:03:52.820 ' 00:03:52.820 19:44:43 json_config -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:52.820 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:52.820 --rc genhtml_branch_coverage=1 00:03:52.820 --rc genhtml_function_coverage=1 00:03:52.820 --rc genhtml_legend=1 00:03:52.820 --rc geninfo_all_blocks=1 00:03:52.820 --rc geninfo_unexecuted_blocks=1 00:03:52.820 00:03:52.820 ' 00:03:52.820 19:44:43 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:52.820 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:52.820 --rc genhtml_branch_coverage=1 00:03:52.820 --rc genhtml_function_coverage=1 00:03:52.820 --rc genhtml_legend=1 00:03:52.820 --rc geninfo_all_blocks=1 00:03:52.820 --rc geninfo_unexecuted_blocks=1 00:03:52.820 00:03:52.820 ' 00:03:52.820 19:44:43 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:52.820 19:44:43 json_config -- nvmf/common.sh@7 -- # uname -s 00:03:52.821 19:44:43 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:52.821 19:44:43 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:52.821 19:44:43 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:52.821 19:44:43 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:52.821 19:44:43 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:52.821 19:44:43 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:52.821 19:44:43 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:52.821 19:44:43 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:52.821 19:44:43 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:52.821 19:44:43 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:52.821 19:44:43 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c43568d3-4192-481f-9cc6-13b2a52015b5 00:03:52.821 19:44:43 json_config -- nvmf/common.sh@18 -- # 
NVME_HOSTID=c43568d3-4192-481f-9cc6-13b2a52015b5 00:03:52.821 19:44:43 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:52.821 19:44:43 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:52.821 19:44:43 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:52.821 19:44:43 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:52.821 19:44:43 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:52.821 19:44:43 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:03:52.821 19:44:43 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:52.821 19:44:43 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:52.821 19:44:43 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:52.821 19:44:43 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:52.821 19:44:43 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:52.821 19:44:43 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:52.821 19:44:43 json_config -- paths/export.sh@5 -- # export PATH 00:03:52.821 19:44:43 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:52.821 19:44:43 json_config -- nvmf/common.sh@51 -- # : 0 00:03:52.821 19:44:43 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:52.821 19:44:43 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:52.821 19:44:43 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:52.821 19:44:43 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:52.821 19:44:43 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:52.821 19:44:43 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:52.821 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:52.821 19:44:43 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:52.821 19:44:43 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:52.821 19:44:43 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:52.821 19:44:43 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 
00:03:52.821 19:44:43 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:03:52.821 19:44:43 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:03:52.821 19:44:43 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:03:52.821 19:44:43 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:03:52.821 WARNING: No tests are enabled so not running JSON configuration tests 00:03:52.821 19:44:43 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:03:52.821 19:44:43 json_config -- json_config/json_config.sh@28 -- # exit 0 00:03:52.821 00:03:52.821 real 0m0.146s 00:03:52.821 user 0m0.098s 00:03:52.821 sys 0m0.054s 00:03:52.821 19:44:43 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:52.821 19:44:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:52.821 ************************************ 00:03:52.821 END TEST json_config 00:03:52.821 ************************************ 00:03:52.821 19:44:43 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:03:52.821 19:44:43 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:52.821 19:44:43 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:52.821 19:44:43 -- common/autotest_common.sh@10 -- # set +x 00:03:52.821 ************************************ 00:03:52.821 START TEST json_config_extra_key 00:03:52.821 ************************************ 00:03:52.821 19:44:43 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:03:52.821 19:44:43 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:52.821 19:44:43 json_config_extra_key -- 
common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:52.821 19:44:43 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:03:52.821 19:44:43 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:52.821 19:44:43 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:52.821 19:44:43 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:52.821 19:44:43 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:52.821 19:44:43 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:03:52.821 19:44:43 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:03:52.821 19:44:43 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:03:52.821 19:44:43 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:03:52.821 19:44:43 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:03:52.821 19:44:43 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:03:52.821 19:44:43 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:03:52.821 19:44:43 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:52.821 19:44:43 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:03:52.821 19:44:43 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:03:52.821 19:44:43 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:52.821 19:44:43 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:52.821 19:44:43 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:03:52.821 19:44:43 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:03:52.821 19:44:43 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:52.821 19:44:43 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:03:52.821 19:44:43 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:03:52.821 19:44:43 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:03:52.821 19:44:43 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:03:52.821 19:44:43 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:52.821 19:44:43 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:03:52.821 19:44:43 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:03:52.821 19:44:43 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:52.821 19:44:43 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:52.821 19:44:43 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:03:52.821 19:44:43 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:52.821 19:44:43 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:52.821 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:52.821 --rc genhtml_branch_coverage=1 00:03:52.821 --rc genhtml_function_coverage=1 00:03:52.821 --rc genhtml_legend=1 00:03:52.821 --rc geninfo_all_blocks=1 00:03:52.821 --rc geninfo_unexecuted_blocks=1 00:03:52.821 00:03:52.821 ' 00:03:52.821 19:44:43 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:52.821 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:52.821 --rc genhtml_branch_coverage=1 00:03:52.821 --rc genhtml_function_coverage=1 00:03:52.821 --rc 
genhtml_legend=1 00:03:52.821 --rc geninfo_all_blocks=1 00:03:52.821 --rc geninfo_unexecuted_blocks=1 00:03:52.821 00:03:52.821 ' 00:03:52.821 19:44:43 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:52.821 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:52.821 --rc genhtml_branch_coverage=1 00:03:52.821 --rc genhtml_function_coverage=1 00:03:52.821 --rc genhtml_legend=1 00:03:52.821 --rc geninfo_all_blocks=1 00:03:52.821 --rc geninfo_unexecuted_blocks=1 00:03:52.821 00:03:52.821 ' 00:03:52.821 19:44:43 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:52.821 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:52.821 --rc genhtml_branch_coverage=1 00:03:52.821 --rc genhtml_function_coverage=1 00:03:52.821 --rc genhtml_legend=1 00:03:52.821 --rc geninfo_all_blocks=1 00:03:52.821 --rc geninfo_unexecuted_blocks=1 00:03:52.821 00:03:52.821 ' 00:03:52.821 19:44:43 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:52.821 19:44:43 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:03:53.079 19:44:43 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:53.079 19:44:43 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:53.079 19:44:43 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:53.079 19:44:43 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:53.079 19:44:43 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:53.079 19:44:43 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:53.079 19:44:43 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:53.079 19:44:43 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:53.079 19:44:43 json_config_extra_key -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:53.079 19:44:43 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:53.079 19:44:43 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c43568d3-4192-481f-9cc6-13b2a52015b5 00:03:53.080 19:44:43 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=c43568d3-4192-481f-9cc6-13b2a52015b5 00:03:53.080 19:44:43 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:53.080 19:44:43 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:53.080 19:44:43 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:53.080 19:44:43 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:53.080 19:44:43 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:53.080 19:44:43 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:03:53.080 19:44:43 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:53.080 19:44:43 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:53.080 19:44:43 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:53.080 19:44:43 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:53.080 19:44:43 json_config_extra_key -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:53.080 19:44:43 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:53.080 19:44:43 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:03:53.080 19:44:43 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:53.080 19:44:43 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:03:53.080 19:44:43 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:53.080 19:44:43 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:53.080 19:44:43 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:53.080 19:44:43 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:53.080 19:44:43 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:03:53.080 19:44:43 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:53.080 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:53.080 19:44:43 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:53.080 19:44:43 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:53.080 19:44:43 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:53.080 19:44:43 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:03:53.080 19:44:43 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:03:53.080 19:44:43 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:03:53.080 19:44:43 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:03:53.080 19:44:43 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:03:53.080 19:44:43 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:03:53.080 19:44:43 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:03:53.080 19:44:43 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:03:53.080 19:44:43 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:03:53.080 19:44:43 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:03:53.080 INFO: launching applications... 00:03:53.080 19:44:43 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
00:03:53.080 19:44:43 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:03:53.080 19:44:43 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:03:53.080 19:44:43 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:03:53.080 19:44:43 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:03:53.080 19:44:43 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:03:53.080 19:44:43 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:03:53.080 19:44:43 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:53.080 19:44:43 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:53.080 19:44:43 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=56828 00:03:53.080 Waiting for target to run... 00:03:53.080 19:44:43 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:03:53.080 19:44:43 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 56828 /var/tmp/spdk_tgt.sock 00:03:53.080 19:44:43 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 56828 ']' 00:03:53.080 19:44:43 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:03:53.080 19:44:43 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:53.080 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:03:53.080 19:44:43 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 
00:03:53.080 19:44:43 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:53.080 19:44:43 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:03:53.080 19:44:43 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:03:53.080 [2024-11-26 19:44:43.853334] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 00:03:53.080 [2024-11-26 19:44:43.853491] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56828 ] 00:03:53.337 [2024-11-26 19:44:44.174056] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:53.595 [2024-11-26 19:44:44.282301] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:54.206 19:44:44 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:54.206 19:44:44 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:03:54.206 00:03:54.206 19:44:44 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:03:54.206 INFO: shutting down applications... 00:03:54.206 19:44:44 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
00:03:54.206 19:44:44 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:03:54.206 19:44:44 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:03:54.206 19:44:44 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:03:54.206 19:44:44 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 56828 ]] 00:03:54.206 19:44:44 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 56828 00:03:54.206 19:44:44 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:03:54.206 19:44:44 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:03:54.206 19:44:44 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 56828 00:03:54.206 19:44:44 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:03:54.490 19:44:45 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:03:54.490 19:44:45 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:03:54.490 19:44:45 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 56828 00:03:54.490 19:44:45 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:03:55.055 19:44:45 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:03:55.055 19:44:45 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:03:55.055 19:44:45 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 56828 00:03:55.055 19:44:45 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:03:55.621 19:44:46 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:03:55.621 19:44:46 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:03:55.621 19:44:46 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 56828 00:03:55.621 19:44:46 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:03:56.189 19:44:46 json_config_extra_key -- json_config/common.sh@40 -- # 
(( i++ )) 00:03:56.189 19:44:46 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:03:56.189 19:44:46 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 56828 00:03:56.189 19:44:46 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:03:56.189 19:44:46 json_config_extra_key -- json_config/common.sh@43 -- # break 00:03:56.189 19:44:46 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:03:56.189 SPDK target shutdown done 00:03:56.189 19:44:46 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:03:56.189 Success 00:03:56.189 19:44:46 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:03:56.189 00:03:56.189 real 0m3.195s 00:03:56.189 user 0m2.887s 00:03:56.189 sys 0m0.448s 00:03:56.189 19:44:46 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:56.189 19:44:46 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:03:56.189 ************************************ 00:03:56.189 END TEST json_config_extra_key 00:03:56.189 ************************************ 00:03:56.189 19:44:46 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:03:56.189 19:44:46 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:56.189 19:44:46 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:56.189 19:44:46 -- common/autotest_common.sh@10 -- # set +x 00:03:56.189 ************************************ 00:03:56.189 START TEST alias_rpc 00:03:56.189 ************************************ 00:03:56.189 19:44:46 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:03:56.189 * Looking for test storage... 
00:03:56.189 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:03:56.189 19:44:46 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:56.189 19:44:46 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:03:56.189 19:44:46 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:56.189 19:44:47 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:56.189 19:44:47 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:56.189 19:44:47 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:56.189 19:44:47 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:56.189 19:44:47 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:03:56.189 19:44:47 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:03:56.189 19:44:47 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:03:56.189 19:44:47 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:03:56.189 19:44:47 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:03:56.189 19:44:47 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:03:56.189 19:44:47 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:03:56.189 19:44:47 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:56.189 19:44:47 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:03:56.189 19:44:47 alias_rpc -- scripts/common.sh@345 -- # : 1 00:03:56.189 19:44:47 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:56.189 19:44:47 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:56.189 19:44:47 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:03:56.189 19:44:47 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:03:56.189 19:44:47 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:56.189 19:44:47 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:03:56.189 19:44:47 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:03:56.189 19:44:47 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:03:56.189 19:44:47 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:03:56.189 19:44:47 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:56.189 19:44:47 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:03:56.189 19:44:47 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:03:56.189 19:44:47 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:56.189 19:44:47 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:56.189 19:44:47 alias_rpc -- scripts/common.sh@368 -- # return 0 00:03:56.189 19:44:47 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:56.189 19:44:47 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:56.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:56.189 --rc genhtml_branch_coverage=1 00:03:56.189 --rc genhtml_function_coverage=1 00:03:56.189 --rc genhtml_legend=1 00:03:56.189 --rc geninfo_all_blocks=1 00:03:56.189 --rc geninfo_unexecuted_blocks=1 00:03:56.189 00:03:56.189 ' 00:03:56.189 19:44:47 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:56.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:56.189 --rc genhtml_branch_coverage=1 00:03:56.189 --rc genhtml_function_coverage=1 00:03:56.189 --rc genhtml_legend=1 00:03:56.189 --rc geninfo_all_blocks=1 00:03:56.189 --rc geninfo_unexecuted_blocks=1 00:03:56.189 00:03:56.189 ' 00:03:56.189 19:44:47 alias_rpc -- common/autotest_common.sh@1707 -- 
# export 'LCOV=lcov 00:03:56.190 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:56.190 --rc genhtml_branch_coverage=1 00:03:56.190 --rc genhtml_function_coverage=1 00:03:56.190 --rc genhtml_legend=1 00:03:56.190 --rc geninfo_all_blocks=1 00:03:56.190 --rc geninfo_unexecuted_blocks=1 00:03:56.190 00:03:56.190 ' 00:03:56.190 19:44:47 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:56.190 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:56.190 --rc genhtml_branch_coverage=1 00:03:56.190 --rc genhtml_function_coverage=1 00:03:56.190 --rc genhtml_legend=1 00:03:56.190 --rc geninfo_all_blocks=1 00:03:56.190 --rc geninfo_unexecuted_blocks=1 00:03:56.190 00:03:56.190 ' 00:03:56.190 19:44:47 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:03:56.190 19:44:47 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:03:56.190 19:44:47 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=56925 00:03:56.190 19:44:47 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 56925 00:03:56.190 19:44:47 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 56925 ']' 00:03:56.190 19:44:47 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:56.190 19:44:47 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:56.190 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:56.190 19:44:47 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:56.190 19:44:47 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:56.190 19:44:47 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:56.190 [2024-11-26 19:44:47.110744] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 
00:03:56.190 [2024-11-26 19:44:47.110887] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56925 ] 00:03:56.448 [2024-11-26 19:44:47.271279] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:56.714 [2024-11-26 19:44:47.396833] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:57.280 19:44:48 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:57.280 19:44:48 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:03:57.280 19:44:48 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:03:57.537 19:44:48 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 56925 00:03:57.537 19:44:48 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 56925 ']' 00:03:57.537 19:44:48 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 56925 00:03:57.537 19:44:48 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:03:57.537 19:44:48 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:57.537 19:44:48 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 56925 00:03:57.537 19:44:48 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:57.537 19:44:48 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:57.537 killing process with pid 56925 00:03:57.537 19:44:48 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 56925' 00:03:57.537 19:44:48 alias_rpc -- common/autotest_common.sh@973 -- # kill 56925 00:03:57.537 19:44:48 alias_rpc -- common/autotest_common.sh@978 -- # wait 56925 00:03:59.436 00:03:59.436 real 0m3.054s 00:03:59.436 user 0m3.089s 00:03:59.436 sys 0m0.465s 00:03:59.436 19:44:49 alias_rpc -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:03:59.436 19:44:49 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:59.436 ************************************ 00:03:59.436 END TEST alias_rpc 00:03:59.436 ************************************ 00:03:59.436 19:44:49 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:03:59.436 19:44:49 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:03:59.436 19:44:49 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:59.436 19:44:49 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:59.436 19:44:49 -- common/autotest_common.sh@10 -- # set +x 00:03:59.436 ************************************ 00:03:59.436 START TEST spdkcli_tcp 00:03:59.436 ************************************ 00:03:59.436 19:44:49 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:03:59.436 * Looking for test storage... 00:03:59.436 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:03:59.436 19:44:50 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:59.436 19:44:50 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:59.436 19:44:50 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:03:59.436 19:44:50 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:59.436 19:44:50 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:59.436 19:44:50 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:59.436 19:44:50 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:59.436 19:44:50 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:03:59.436 19:44:50 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:03:59.436 19:44:50 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:03:59.436 19:44:50 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:03:59.436 19:44:50 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:03:59.436 
19:44:50 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:03:59.436 19:44:50 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:03:59.436 19:44:50 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:59.436 19:44:50 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:03:59.436 19:44:50 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:03:59.436 19:44:50 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:59.436 19:44:50 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:59.436 19:44:50 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:03:59.436 19:44:50 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:03:59.436 19:44:50 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:59.436 19:44:50 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:03:59.436 19:44:50 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:03:59.436 19:44:50 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:03:59.436 19:44:50 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:03:59.436 19:44:50 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:59.436 19:44:50 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:03:59.436 19:44:50 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:03:59.436 19:44:50 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:59.436 19:44:50 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:59.436 19:44:50 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:03:59.436 19:44:50 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:59.436 19:44:50 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:59.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:59.436 --rc genhtml_branch_coverage=1 00:03:59.436 --rc genhtml_function_coverage=1 00:03:59.436 --rc genhtml_legend=1 
00:03:59.436 --rc geninfo_all_blocks=1 00:03:59.436 --rc geninfo_unexecuted_blocks=1 00:03:59.436 00:03:59.436 ' 00:03:59.436 19:44:50 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:59.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:59.436 --rc genhtml_branch_coverage=1 00:03:59.436 --rc genhtml_function_coverage=1 00:03:59.436 --rc genhtml_legend=1 00:03:59.436 --rc geninfo_all_blocks=1 00:03:59.436 --rc geninfo_unexecuted_blocks=1 00:03:59.436 00:03:59.436 ' 00:03:59.436 19:44:50 spdkcli_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:59.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:59.436 --rc genhtml_branch_coverage=1 00:03:59.436 --rc genhtml_function_coverage=1 00:03:59.436 --rc genhtml_legend=1 00:03:59.436 --rc geninfo_all_blocks=1 00:03:59.436 --rc geninfo_unexecuted_blocks=1 00:03:59.436 00:03:59.436 ' 00:03:59.436 19:44:50 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:59.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:59.436 --rc genhtml_branch_coverage=1 00:03:59.436 --rc genhtml_function_coverage=1 00:03:59.436 --rc genhtml_legend=1 00:03:59.436 --rc geninfo_all_blocks=1 00:03:59.436 --rc geninfo_unexecuted_blocks=1 00:03:59.436 00:03:59.436 ' 00:03:59.436 19:44:50 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:03:59.436 19:44:50 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:03:59.436 19:44:50 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:03:59.436 19:44:50 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:03:59.436 19:44:50 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:03:59.436 19:44:50 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:03:59.436 19:44:50 spdkcli_tcp -- 
spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:03:59.436 19:44:50 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:59.436 19:44:50 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:03:59.436 19:44:50 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=57019 00:03:59.436 19:44:50 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 57019 00:03:59.436 19:44:50 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 57019 ']' 00:03:59.436 19:44:50 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:03:59.436 19:44:50 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:59.436 19:44:50 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:59.436 19:44:50 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:59.436 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:59.436 19:44:50 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:59.436 19:44:50 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:03:59.436 [2024-11-26 19:44:50.191962] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 
00:03:59.436 [2024-11-26 19:44:50.192089] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57019 ] 00:03:59.436 [2024-11-26 19:44:50.351055] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:03:59.694 [2024-11-26 19:44:50.500917] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:03:59.694 [2024-11-26 19:44:50.500919] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:00.258 19:44:51 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:00.258 19:44:51 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:04:00.258 19:44:51 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=57038 00:04:00.258 19:44:51 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:00.258 19:44:51 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:00.515 [ 00:04:00.515 "bdev_malloc_delete", 00:04:00.515 "bdev_malloc_create", 00:04:00.515 "bdev_null_resize", 00:04:00.515 "bdev_null_delete", 00:04:00.515 "bdev_null_create", 00:04:00.515 "bdev_nvme_cuse_unregister", 00:04:00.515 "bdev_nvme_cuse_register", 00:04:00.515 "bdev_opal_new_user", 00:04:00.515 "bdev_opal_set_lock_state", 00:04:00.515 "bdev_opal_delete", 00:04:00.515 "bdev_opal_get_info", 00:04:00.515 "bdev_opal_create", 00:04:00.515 "bdev_nvme_opal_revert", 00:04:00.515 "bdev_nvme_opal_init", 00:04:00.515 "bdev_nvme_send_cmd", 00:04:00.515 "bdev_nvme_set_keys", 00:04:00.515 "bdev_nvme_get_path_iostat", 00:04:00.515 "bdev_nvme_get_mdns_discovery_info", 00:04:00.515 "bdev_nvme_stop_mdns_discovery", 00:04:00.515 "bdev_nvme_start_mdns_discovery", 00:04:00.515 "bdev_nvme_set_multipath_policy", 00:04:00.515 
"bdev_nvme_set_preferred_path", 00:04:00.515 "bdev_nvme_get_io_paths", 00:04:00.515 "bdev_nvme_remove_error_injection", 00:04:00.515 "bdev_nvme_add_error_injection", 00:04:00.515 "bdev_nvme_get_discovery_info", 00:04:00.515 "bdev_nvme_stop_discovery", 00:04:00.515 "bdev_nvme_start_discovery", 00:04:00.515 "bdev_nvme_get_controller_health_info", 00:04:00.515 "bdev_nvme_disable_controller", 00:04:00.515 "bdev_nvme_enable_controller", 00:04:00.515 "bdev_nvme_reset_controller", 00:04:00.515 "bdev_nvme_get_transport_statistics", 00:04:00.515 "bdev_nvme_apply_firmware", 00:04:00.515 "bdev_nvme_detach_controller", 00:04:00.515 "bdev_nvme_get_controllers", 00:04:00.515 "bdev_nvme_attach_controller", 00:04:00.515 "bdev_nvme_set_hotplug", 00:04:00.515 "bdev_nvme_set_options", 00:04:00.515 "bdev_passthru_delete", 00:04:00.515 "bdev_passthru_create", 00:04:00.515 "bdev_lvol_set_parent_bdev", 00:04:00.515 "bdev_lvol_set_parent", 00:04:00.515 "bdev_lvol_check_shallow_copy", 00:04:00.515 "bdev_lvol_start_shallow_copy", 00:04:00.515 "bdev_lvol_grow_lvstore", 00:04:00.515 "bdev_lvol_get_lvols", 00:04:00.515 "bdev_lvol_get_lvstores", 00:04:00.515 "bdev_lvol_delete", 00:04:00.515 "bdev_lvol_set_read_only", 00:04:00.515 "bdev_lvol_resize", 00:04:00.515 "bdev_lvol_decouple_parent", 00:04:00.515 "bdev_lvol_inflate", 00:04:00.515 "bdev_lvol_rename", 00:04:00.515 "bdev_lvol_clone_bdev", 00:04:00.515 "bdev_lvol_clone", 00:04:00.515 "bdev_lvol_snapshot", 00:04:00.515 "bdev_lvol_create", 00:04:00.515 "bdev_lvol_delete_lvstore", 00:04:00.515 "bdev_lvol_rename_lvstore", 00:04:00.515 "bdev_lvol_create_lvstore", 00:04:00.515 "bdev_raid_set_options", 00:04:00.515 "bdev_raid_remove_base_bdev", 00:04:00.515 "bdev_raid_add_base_bdev", 00:04:00.515 "bdev_raid_delete", 00:04:00.515 "bdev_raid_create", 00:04:00.515 "bdev_raid_get_bdevs", 00:04:00.515 "bdev_error_inject_error", 00:04:00.515 "bdev_error_delete", 00:04:00.515 "bdev_error_create", 00:04:00.515 "bdev_split_delete", 00:04:00.515 
"bdev_split_create", 00:04:00.515 "bdev_delay_delete", 00:04:00.515 "bdev_delay_create", 00:04:00.515 "bdev_delay_update_latency", 00:04:00.515 "bdev_zone_block_delete", 00:04:00.515 "bdev_zone_block_create", 00:04:00.515 "blobfs_create", 00:04:00.515 "blobfs_detect", 00:04:00.515 "blobfs_set_cache_size", 00:04:00.515 "bdev_aio_delete", 00:04:00.515 "bdev_aio_rescan", 00:04:00.515 "bdev_aio_create", 00:04:00.515 "bdev_ftl_set_property", 00:04:00.515 "bdev_ftl_get_properties", 00:04:00.515 "bdev_ftl_get_stats", 00:04:00.515 "bdev_ftl_unmap", 00:04:00.516 "bdev_ftl_unload", 00:04:00.516 "bdev_ftl_delete", 00:04:00.516 "bdev_ftl_load", 00:04:00.516 "bdev_ftl_create", 00:04:00.516 "bdev_virtio_attach_controller", 00:04:00.516 "bdev_virtio_scsi_get_devices", 00:04:00.516 "bdev_virtio_detach_controller", 00:04:00.516 "bdev_virtio_blk_set_hotplug", 00:04:00.516 "bdev_iscsi_delete", 00:04:00.516 "bdev_iscsi_create", 00:04:00.516 "bdev_iscsi_set_options", 00:04:00.516 "accel_error_inject_error", 00:04:00.516 "ioat_scan_accel_module", 00:04:00.516 "dsa_scan_accel_module", 00:04:00.516 "iaa_scan_accel_module", 00:04:00.516 "keyring_file_remove_key", 00:04:00.516 "keyring_file_add_key", 00:04:00.516 "keyring_linux_set_options", 00:04:00.516 "fsdev_aio_delete", 00:04:00.516 "fsdev_aio_create", 00:04:00.516 "iscsi_get_histogram", 00:04:00.516 "iscsi_enable_histogram", 00:04:00.516 "iscsi_set_options", 00:04:00.516 "iscsi_get_auth_groups", 00:04:00.516 "iscsi_auth_group_remove_secret", 00:04:00.516 "iscsi_auth_group_add_secret", 00:04:00.516 "iscsi_delete_auth_group", 00:04:00.516 "iscsi_create_auth_group", 00:04:00.516 "iscsi_set_discovery_auth", 00:04:00.516 "iscsi_get_options", 00:04:00.516 "iscsi_target_node_request_logout", 00:04:00.516 "iscsi_target_node_set_redirect", 00:04:00.516 "iscsi_target_node_set_auth", 00:04:00.516 "iscsi_target_node_add_lun", 00:04:00.516 "iscsi_get_stats", 00:04:00.516 "iscsi_get_connections", 00:04:00.516 "iscsi_portal_group_set_auth", 
00:04:00.516 "iscsi_start_portal_group", 00:04:00.516 "iscsi_delete_portal_group", 00:04:00.516 "iscsi_create_portal_group", 00:04:00.516 "iscsi_get_portal_groups", 00:04:00.516 "iscsi_delete_target_node", 00:04:00.516 "iscsi_target_node_remove_pg_ig_maps", 00:04:00.516 "iscsi_target_node_add_pg_ig_maps", 00:04:00.516 "iscsi_create_target_node", 00:04:00.516 "iscsi_get_target_nodes", 00:04:00.516 "iscsi_delete_initiator_group", 00:04:00.516 "iscsi_initiator_group_remove_initiators", 00:04:00.516 "iscsi_initiator_group_add_initiators", 00:04:00.516 "iscsi_create_initiator_group", 00:04:00.516 "iscsi_get_initiator_groups", 00:04:00.516 "nvmf_set_crdt", 00:04:00.516 "nvmf_set_config", 00:04:00.516 "nvmf_set_max_subsystems", 00:04:00.516 "nvmf_stop_mdns_prr", 00:04:00.516 "nvmf_publish_mdns_prr", 00:04:00.516 "nvmf_subsystem_get_listeners", 00:04:00.516 "nvmf_subsystem_get_qpairs", 00:04:00.516 "nvmf_subsystem_get_controllers", 00:04:00.516 "nvmf_get_stats", 00:04:00.516 "nvmf_get_transports", 00:04:00.516 "nvmf_create_transport", 00:04:00.516 "nvmf_get_targets", 00:04:00.516 "nvmf_delete_target", 00:04:00.516 "nvmf_create_target", 00:04:00.516 "nvmf_subsystem_allow_any_host", 00:04:00.516 "nvmf_subsystem_set_keys", 00:04:00.516 "nvmf_subsystem_remove_host", 00:04:00.516 "nvmf_subsystem_add_host", 00:04:00.516 "nvmf_ns_remove_host", 00:04:00.516 "nvmf_ns_add_host", 00:04:00.516 "nvmf_subsystem_remove_ns", 00:04:00.516 "nvmf_subsystem_set_ns_ana_group", 00:04:00.516 "nvmf_subsystem_add_ns", 00:04:00.516 "nvmf_subsystem_listener_set_ana_state", 00:04:00.516 "nvmf_discovery_get_referrals", 00:04:00.516 "nvmf_discovery_remove_referral", 00:04:00.516 "nvmf_discovery_add_referral", 00:04:00.516 "nvmf_subsystem_remove_listener", 00:04:00.516 "nvmf_subsystem_add_listener", 00:04:00.516 "nvmf_delete_subsystem", 00:04:00.516 "nvmf_create_subsystem", 00:04:00.516 "nvmf_get_subsystems", 00:04:00.516 "env_dpdk_get_mem_stats", 00:04:00.516 "nbd_get_disks", 00:04:00.516 
"nbd_stop_disk", 00:04:00.516 "nbd_start_disk", 00:04:00.516 "ublk_recover_disk", 00:04:00.516 "ublk_get_disks", 00:04:00.516 "ublk_stop_disk", 00:04:00.516 "ublk_start_disk", 00:04:00.516 "ublk_destroy_target", 00:04:00.516 "ublk_create_target", 00:04:00.516 "virtio_blk_create_transport", 00:04:00.516 "virtio_blk_get_transports", 00:04:00.516 "vhost_controller_set_coalescing", 00:04:00.516 "vhost_get_controllers", 00:04:00.516 "vhost_delete_controller", 00:04:00.516 "vhost_create_blk_controller", 00:04:00.516 "vhost_scsi_controller_remove_target", 00:04:00.516 "vhost_scsi_controller_add_target", 00:04:00.516 "vhost_start_scsi_controller", 00:04:00.516 "vhost_create_scsi_controller", 00:04:00.516 "thread_set_cpumask", 00:04:00.516 "scheduler_set_options", 00:04:00.516 "framework_get_governor", 00:04:00.516 "framework_get_scheduler", 00:04:00.516 "framework_set_scheduler", 00:04:00.516 "framework_get_reactors", 00:04:00.516 "thread_get_io_channels", 00:04:00.516 "thread_get_pollers", 00:04:00.516 "thread_get_stats", 00:04:00.516 "framework_monitor_context_switch", 00:04:00.516 "spdk_kill_instance", 00:04:00.516 "log_enable_timestamps", 00:04:00.516 "log_get_flags", 00:04:00.516 "log_clear_flag", 00:04:00.516 "log_set_flag", 00:04:00.516 "log_get_level", 00:04:00.516 "log_set_level", 00:04:00.516 "log_get_print_level", 00:04:00.516 "log_set_print_level", 00:04:00.516 "framework_enable_cpumask_locks", 00:04:00.516 "framework_disable_cpumask_locks", 00:04:00.516 "framework_wait_init", 00:04:00.516 "framework_start_init", 00:04:00.516 "scsi_get_devices", 00:04:00.516 "bdev_get_histogram", 00:04:00.516 "bdev_enable_histogram", 00:04:00.516 "bdev_set_qos_limit", 00:04:00.516 "bdev_set_qd_sampling_period", 00:04:00.516 "bdev_get_bdevs", 00:04:00.516 "bdev_reset_iostat", 00:04:00.516 "bdev_get_iostat", 00:04:00.516 "bdev_examine", 00:04:00.516 "bdev_wait_for_examine", 00:04:00.516 "bdev_set_options", 00:04:00.516 "accel_get_stats", 00:04:00.516 "accel_set_options", 
00:04:00.516 "accel_set_driver", 00:04:00.516 "accel_crypto_key_destroy", 00:04:00.516 "accel_crypto_keys_get", 00:04:00.516 "accel_crypto_key_create", 00:04:00.516 "accel_assign_opc", 00:04:00.516 "accel_get_module_info", 00:04:00.516 "accel_get_opc_assignments", 00:04:00.516 "vmd_rescan", 00:04:00.516 "vmd_remove_device", 00:04:00.516 "vmd_enable", 00:04:00.516 "sock_get_default_impl", 00:04:00.516 "sock_set_default_impl", 00:04:00.516 "sock_impl_set_options", 00:04:00.516 "sock_impl_get_options", 00:04:00.516 "iobuf_get_stats", 00:04:00.516 "iobuf_set_options", 00:04:00.516 "keyring_get_keys", 00:04:00.516 "framework_get_pci_devices", 00:04:00.516 "framework_get_config", 00:04:00.516 "framework_get_subsystems", 00:04:00.516 "fsdev_set_opts", 00:04:00.516 "fsdev_get_opts", 00:04:00.516 "trace_get_info", 00:04:00.516 "trace_get_tpoint_group_mask", 00:04:00.516 "trace_disable_tpoint_group", 00:04:00.516 "trace_enable_tpoint_group", 00:04:00.516 "trace_clear_tpoint_mask", 00:04:00.516 "trace_set_tpoint_mask", 00:04:00.516 "notify_get_notifications", 00:04:00.516 "notify_get_types", 00:04:00.516 "spdk_get_version", 00:04:00.516 "rpc_get_methods" 00:04:00.516 ] 00:04:00.516 19:44:51 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:00.516 19:44:51 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:00.516 19:44:51 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:00.516 19:44:51 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:00.516 19:44:51 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 57019 00:04:00.516 19:44:51 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 57019 ']' 00:04:00.516 19:44:51 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 57019 00:04:00.516 19:44:51 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:04:00.516 19:44:51 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:00.516 19:44:51 spdkcli_tcp -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57019 00:04:00.516 19:44:51 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:00.516 19:44:51 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:00.516 killing process with pid 57019 00:04:00.516 19:44:51 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57019' 00:04:00.516 19:44:51 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 57019 00:04:00.516 19:44:51 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 57019 00:04:02.418 ************************************ 00:04:02.418 END TEST spdkcli_tcp 00:04:02.418 ************************************ 00:04:02.418 00:04:02.418 real 0m3.077s 00:04:02.418 user 0m5.474s 00:04:02.418 sys 0m0.466s 00:04:02.418 19:44:53 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:02.418 19:44:53 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:02.418 19:44:53 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:02.418 19:44:53 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:02.418 19:44:53 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:02.418 19:44:53 -- common/autotest_common.sh@10 -- # set +x 00:04:02.418 ************************************ 00:04:02.418 START TEST dpdk_mem_utility 00:04:02.418 ************************************ 00:04:02.418 19:44:53 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:02.418 * Looking for test storage... 
00:04:02.418 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:04:02.418 19:44:53 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:02.418 19:44:53 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:02.418 19:44:53 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:04:02.418 19:44:53 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:02.418 19:44:53 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:02.418 19:44:53 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:02.418 19:44:53 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:02.418 19:44:53 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:04:02.418 19:44:53 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:04:02.419 19:44:53 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:04:02.419 19:44:53 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:04:02.419 19:44:53 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:04:02.419 19:44:53 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:04:02.419 19:44:53 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:04:02.419 19:44:53 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:02.419 19:44:53 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:04:02.419 19:44:53 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:04:02.419 19:44:53 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:02.419 19:44:53 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:02.419 19:44:53 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:04:02.419 19:44:53 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:04:02.419 19:44:53 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:02.419 19:44:53 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:04:02.419 19:44:53 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:04:02.419 19:44:53 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:04:02.419 19:44:53 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:04:02.419 19:44:53 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:02.419 19:44:53 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:04:02.419 19:44:53 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:04:02.419 19:44:53 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:02.419 19:44:53 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:02.419 19:44:53 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:04:02.419 19:44:53 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:02.419 19:44:53 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:02.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:02.419 --rc genhtml_branch_coverage=1 00:04:02.419 --rc genhtml_function_coverage=1 00:04:02.419 --rc genhtml_legend=1 00:04:02.419 --rc geninfo_all_blocks=1 00:04:02.419 --rc geninfo_unexecuted_blocks=1 00:04:02.419 00:04:02.419 ' 00:04:02.419 19:44:53 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:02.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:02.419 --rc genhtml_branch_coverage=1 00:04:02.419 --rc genhtml_function_coverage=1 00:04:02.419 --rc genhtml_legend=1 00:04:02.419 --rc geninfo_all_blocks=1 00:04:02.419 --rc 
geninfo_unexecuted_blocks=1 00:04:02.419 00:04:02.419 ' 00:04:02.419 19:44:53 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:02.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:02.419 --rc genhtml_branch_coverage=1 00:04:02.419 --rc genhtml_function_coverage=1 00:04:02.419 --rc genhtml_legend=1 00:04:02.419 --rc geninfo_all_blocks=1 00:04:02.419 --rc geninfo_unexecuted_blocks=1 00:04:02.419 00:04:02.419 ' 00:04:02.419 19:44:53 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:02.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:02.419 --rc genhtml_branch_coverage=1 00:04:02.419 --rc genhtml_function_coverage=1 00:04:02.419 --rc genhtml_legend=1 00:04:02.419 --rc geninfo_all_blocks=1 00:04:02.419 --rc geninfo_unexecuted_blocks=1 00:04:02.419 00:04:02.419 ' 00:04:02.419 19:44:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:02.419 19:44:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=57132 00:04:02.419 19:44:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 57132 00:04:02.419 19:44:53 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 57132 ']' 00:04:02.419 19:44:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:02.419 19:44:53 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:02.419 19:44:53 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:02.419 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:02.419 19:44:53 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:04:02.419 19:44:53 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:02.419 19:44:53 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:02.419 [2024-11-26 19:44:53.292120] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 00:04:02.419 [2024-11-26 19:44:53.292243] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57132 ] 00:04:02.677 [2024-11-26 19:44:53.449232] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:02.677 [2024-11-26 19:44:53.568569] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:03.613 19:44:54 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:03.613 19:44:54 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:04:03.613 19:44:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:03.613 19:44:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:03.613 19:44:54 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:03.613 19:44:54 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:03.613 { 00:04:03.613 "filename": "/tmp/spdk_mem_dump.txt" 00:04:03.613 } 00:04:03.613 19:44:54 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:03.613 19:44:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:03.613 DPDK memory size 824.000000 MiB in 1 heap(s) 00:04:03.613 1 heaps totaling size 824.000000 MiB 00:04:03.613 size: 824.000000 MiB heap id: 0 00:04:03.613 end heaps---------- 00:04:03.613 9 mempools totaling size 603.782043 MiB 00:04:03.613 
size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:03.613 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:03.613 size: 100.555481 MiB name: bdev_io_57132 00:04:03.613 size: 50.003479 MiB name: msgpool_57132 00:04:03.613 size: 36.509338 MiB name: fsdev_io_57132 00:04:03.613 size: 21.763794 MiB name: PDU_Pool 00:04:03.613 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:03.613 size: 4.133484 MiB name: evtpool_57132 00:04:03.613 size: 0.026123 MiB name: Session_Pool 00:04:03.613 end mempools------- 00:04:03.613 6 memzones totaling size 4.142822 MiB 00:04:03.613 size: 1.000366 MiB name: RG_ring_0_57132 00:04:03.613 size: 1.000366 MiB name: RG_ring_1_57132 00:04:03.613 size: 1.000366 MiB name: RG_ring_4_57132 00:04:03.613 size: 1.000366 MiB name: RG_ring_5_57132 00:04:03.613 size: 0.125366 MiB name: RG_ring_2_57132 00:04:03.613 size: 0.015991 MiB name: RG_ring_3_57132 00:04:03.613 end memzones------- 00:04:03.613 19:44:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:04:03.613 heap id: 0 total size: 824.000000 MiB number of busy elements: 323 number of free elements: 18 00:04:03.613 list of free elements. 
size: 16.779419 MiB 00:04:03.613 element at address: 0x200006400000 with size: 1.995972 MiB 00:04:03.613 element at address: 0x20000a600000 with size: 1.995972 MiB 00:04:03.613 element at address: 0x200003e00000 with size: 1.991028 MiB 00:04:03.613 element at address: 0x200019500040 with size: 0.999939 MiB 00:04:03.613 element at address: 0x200019900040 with size: 0.999939 MiB 00:04:03.613 element at address: 0x200019a00000 with size: 0.999084 MiB 00:04:03.613 element at address: 0x200032600000 with size: 0.994324 MiB 00:04:03.613 element at address: 0x200000400000 with size: 0.992004 MiB 00:04:03.613 element at address: 0x200019200000 with size: 0.959656 MiB 00:04:03.613 element at address: 0x200019d00040 with size: 0.936401 MiB 00:04:03.613 element at address: 0x200000200000 with size: 0.716980 MiB 00:04:03.613 element at address: 0x20001b400000 with size: 0.560730 MiB 00:04:03.613 element at address: 0x200000c00000 with size: 0.489197 MiB 00:04:03.613 element at address: 0x200019600000 with size: 0.487976 MiB 00:04:03.613 element at address: 0x200019e00000 with size: 0.485413 MiB 00:04:03.613 element at address: 0x200012c00000 with size: 0.433472 MiB 00:04:03.613 element at address: 0x200028800000 with size: 0.390442 MiB 00:04:03.613 element at address: 0x200000800000 with size: 0.350891 MiB 00:04:03.613 list of standard malloc elements. 
size: 199.289673 MiB 00:04:03.613 element at address: 0x20000a7fef80 with size: 132.000183 MiB 00:04:03.613 element at address: 0x2000065fef80 with size: 64.000183 MiB 00:04:03.613 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:04:03.613 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:04:03.613 element at address: 0x200019bfff80 with size: 1.000183 MiB 00:04:03.613 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:04:03.613 element at address: 0x200019deff40 with size: 0.062683 MiB 00:04:03.613 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:04:03.614 element at address: 0x20000a5ff040 with size: 0.000427 MiB 00:04:03.614 element at address: 0x200019defdc0 with size: 0.000366 MiB 00:04:03.614 element at address: 0x200012bff040 with size: 0.000305 MiB 00:04:03.614 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:04:03.614 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:04:03.614 element at address: 0x2000004fdf40 with size: 0.000244 MiB 00:04:03.614 element at address: 0x2000004fe040 with size: 0.000244 MiB 00:04:03.614 element at address: 0x2000004fe140 with size: 0.000244 MiB 00:04:03.614 element at address: 0x2000004fe240 with size: 0.000244 MiB 00:04:03.614 element at address: 0x2000004fe340 with size: 0.000244 MiB 00:04:03.614 element at address: 0x2000004fe440 with size: 0.000244 MiB 00:04:03.614 element at address: 0x2000004fe540 with size: 0.000244 MiB 00:04:03.614 element at address: 0x2000004fe640 with size: 0.000244 MiB 00:04:03.614 element at address: 0x2000004fe740 with size: 0.000244 MiB 00:04:03.614 element at address: 0x2000004fe840 with size: 0.000244 MiB 00:04:03.614 element at address: 0x2000004fe940 with size: 0.000244 MiB 00:04:03.614 element at address: 0x2000004fea40 with size: 0.000244 MiB 00:04:03.614 element at address: 0x2000004feb40 with size: 0.000244 MiB 00:04:03.614 element at address: 0x2000004fec40 with size: 0.000244 MiB 00:04:03.614 element at 
address: 0x2000004fed40 with size: 0.000244 MiB 00:04:03.614 element at address: 0x2000004fee40 with size: 0.000244 MiB 00:04:03.614 element at address: 0x2000004fef40 with size: 0.000244 MiB 00:04:03.614 element at address: 0x2000004ff040 with size: 0.000244 MiB 00:04:03.614 element at address: 0x2000004ff140 with size: 0.000244 MiB 00:04:03.614 element at address: 0x2000004ff240 with size: 0.000244 MiB 00:04:03.614 element at address: 0x2000004ff340 with size: 0.000244 MiB 00:04:03.614 element at address: 0x2000004ff440 with size: 0.000244 MiB 00:04:03.614 element at address: 0x2000004ff540 with size: 0.000244 MiB 00:04:03.614 element at address: 0x2000004ff640 with size: 0.000244 MiB 00:04:03.614 element at address: 0x2000004ff740 with size: 0.000244 MiB 00:04:03.614 element at address: 0x2000004ff840 with size: 0.000244 MiB 00:04:03.614 element at address: 0x2000004ff940 with size: 0.000244 MiB 00:04:03.614 element at address: 0x2000004ffbc0 with size: 0.000244 MiB 00:04:03.614 element at address: 0x2000004ffcc0 with size: 0.000244 MiB 00:04:03.614 element at address: 0x2000004ffdc0 with size: 0.000244 MiB 00:04:03.614 element at address: 0x20000087e1c0 with size: 0.000244 MiB 00:04:03.614 element at address: 0x20000087e2c0 with size: 0.000244 MiB 00:04:03.614 element at address: 0x20000087e3c0 with size: 0.000244 MiB 00:04:03.614 element at address: 0x20000087e4c0 with size: 0.000244 MiB 00:04:03.614 element at address: 0x20000087e5c0 with size: 0.000244 MiB 00:04:03.614 element at address: 0x20000087e6c0 with size: 0.000244 MiB 00:04:03.614 element at address: 0x20000087e7c0 with size: 0.000244 MiB 00:04:03.614 element at address: 0x20000087e8c0 with size: 0.000244 MiB 00:04:03.614 element at address: 0x20000087e9c0 with size: 0.000244 MiB 00:04:03.614 element at address: 0x20000087eac0 with size: 0.000244 MiB 00:04:03.614 element at address: 0x20000087ebc0 with size: 0.000244 MiB 00:04:03.614 element at address: 0x20000087ecc0 with size: 0.000244 MiB 
00:04:03.614 element at address: 0x20000087edc0 with size: 0.000244 MiB 00:04:03.614 element at address: 0x20000087eec0 with size: 0.000244 MiB 00:04:03.614 element at address: 0x20000087efc0 with size: 0.000244 MiB 00:04:03.614 element at address: 0x20000087f0c0 with size: 0.000244 MiB 00:04:03.614 element at address: 0x20000087f1c0 with size: 0.000244 MiB 00:04:03.614 element at address: 0x20000087f2c0 with size: 0.000244 MiB 00:04:03.614 element at address: 0x20000087f3c0 with size: 0.000244 MiB 00:04:03.614 element at address: 0x20000087f4c0 with size: 0.000244 MiB 00:04:03.614 element at address: 0x2000008ff800 with size: 0.000244 MiB 00:04:03.614 element at address: 0x2000008ffa80 with size: 0.000244 MiB 00:04:03.614 element at address: 0x200000c7d3c0 with size: 0.000244 MiB 00:04:03.614 element at address: 0x200000c7d4c0 with size: 0.000244 MiB 00:04:03.614 element at address: 0x200000c7d5c0 with size: 0.000244 MiB 00:04:03.614 element at address: 0x200000c7d6c0 with size: 0.000244 MiB 00:04:03.614 element at address: 0x200000c7d7c0 with size: 0.000244 MiB 00:04:03.614 element at address: 0x200000c7d8c0 with size: 0.000244 MiB 00:04:03.614 element at address: 0x200000c7d9c0 with size: 0.000244 MiB 00:04:03.614 element at address: 0x200000c7dac0 with size: 0.000244 MiB 00:04:03.614 element at address: 0x200000c7dbc0 with size: 0.000244 MiB 00:04:03.614 element at address: 0x200000c7dcc0 with size: 0.000244 MiB 00:04:03.614 element at address: 0x200000c7ddc0 with size: 0.000244 MiB 00:04:03.614 element at address: 0x200000c7dec0 with size: 0.000244 MiB 00:04:03.614 element at address: 0x200000c7dfc0 with size: 0.000244 MiB 00:04:03.614 element at address: 0x200000c7e0c0 with size: 0.000244 MiB 00:04:03.614 element at address: 0x200000c7e1c0 with size: 0.000244 MiB 00:04:03.614 element at address: 0x200000c7e2c0 with size: 0.000244 MiB 00:04:03.614 element at address: 0x200000c7e3c0 with size: 0.000244 MiB 00:04:03.614 element at address: 0x200000c7e4c0 with 
size: 0.000244 MiB 00:04:03.614 element at address: 0x200000c7e5c0 with size: 0.000244 MiB 00:04:03.614 element at address: 0x200000c7e6c0 with size: 0.000244 MiB 00:04:03.614 element at address: 0x200000c7e7c0 with size: 0.000244 MiB 00:04:03.614 element at address: 0x200000c7e8c0 with size: 0.000244 MiB 00:04:03.614 element at address: 0x200000c7e9c0 with size: 0.000244 MiB 00:04:03.614 element at address: 0x200000c7eac0 with size: 0.000244 MiB 00:04:03.614 element at address: 0x200000c7ebc0 with size: 0.000244 MiB 00:04:03.614 element at address: 0x200000cfef00 with size: 0.000244 MiB 00:04:03.614 element at address: 0x200000cff000 with size: 0.000244 MiB 00:04:03.614 element at address: 0x20000a5ff200 with size: 0.000244 MiB 00:04:03.614 element at address: 0x20000a5ff300 with size: 0.000244 MiB 00:04:03.614 element at address: 0x20000a5ff400 with size: 0.000244 MiB 00:04:03.614 element at address: 0x20000a5ff500 with size: 0.000244 MiB 00:04:03.614 element at address: 0x20000a5ff600 with size: 0.000244 MiB 00:04:03.614 element at address: 0x20000a5ff700 with size: 0.000244 MiB 00:04:03.614 element at address: 0x20000a5ff800 with size: 0.000244 MiB 00:04:03.614 element at address: 0x20000a5ff900 with size: 0.000244 MiB 00:04:03.614 element at address: 0x20000a5ffa00 with size: 0.000244 MiB 00:04:03.614 element at address: 0x20000a5ffb00 with size: 0.000244 MiB 00:04:03.614 element at address: 0x20000a5ffc00 with size: 0.000244 MiB 00:04:03.614 element at address: 0x20000a5ffd00 with size: 0.000244 MiB 00:04:03.614 element at address: 0x20000a5ffe00 with size: 0.000244 MiB 00:04:03.614 element at address: 0x20000a5fff00 with size: 0.000244 MiB 00:04:03.614 element at address: 0x200012bff180 with size: 0.000244 MiB 00:04:03.614 element at address: 0x200012bff280 with size: 0.000244 MiB 00:04:03.614 element at address: 0x200012bff380 with size: 0.000244 MiB 00:04:03.614 element at address: 0x200012bff480 with size: 0.000244 MiB 00:04:03.614 element at address: 
0x200012bff580 with size: 0.000244 MiB 00:04:03.614 element at address: 0x200012bff680 with size: 0.000244 MiB 00:04:03.614 element at address: 0x200012bff780 with size: 0.000244 MiB 00:04:03.614 element at address: 0x200012bff880 with size: 0.000244 MiB 00:04:03.614 element at address: 0x200012bff980 with size: 0.000244 MiB 00:04:03.614 element at address: 0x200012bffa80 with size: 0.000244 MiB 00:04:03.614 element at address: 0x200012bffb80 with size: 0.000244 MiB 00:04:03.614 element at address: 0x200012bffc80 with size: 0.000244 MiB 00:04:03.614 element at address: 0x200012bfff00 with size: 0.000244 MiB 00:04:03.614 element at address: 0x200012c6ef80 with size: 0.000244 MiB 00:04:03.614 element at address: 0x200012c6f080 with size: 0.000244 MiB 00:04:03.614 element at address: 0x200012c6f180 with size: 0.000244 MiB 00:04:03.614 element at address: 0x200012c6f280 with size: 0.000244 MiB 00:04:03.614 element at address: 0x200012c6f380 with size: 0.000244 MiB 00:04:03.614 element at address: 0x200012c6f480 with size: 0.000244 MiB 00:04:03.614 element at address: 0x200012c6f580 with size: 0.000244 MiB 00:04:03.614 element at address: 0x200012c6f680 with size: 0.000244 MiB 00:04:03.614 element at address: 0x200012c6f780 with size: 0.000244 MiB 00:04:03.614 element at address: 0x200012c6f880 with size: 0.000244 MiB 00:04:03.614 element at address: 0x200012cefbc0 with size: 0.000244 MiB 00:04:03.614 element at address: 0x2000192fdd00 with size: 0.000244 MiB 00:04:03.614 element at address: 0x20001967cec0 with size: 0.000244 MiB 00:04:03.614 element at address: 0x20001967cfc0 with size: 0.000244 MiB 00:04:03.614 element at address: 0x20001967d0c0 with size: 0.000244 MiB 00:04:03.614 element at address: 0x20001967d1c0 with size: 0.000244 MiB 00:04:03.614 element at address: 0x20001967d2c0 with size: 0.000244 MiB 00:04:03.614 element at address: 0x20001967d3c0 with size: 0.000244 MiB 00:04:03.614 element at address: 0x20001967d4c0 with size: 0.000244 MiB 00:04:03.614 
element at address: 0x20001967d5c0 with size: 0.000244 MiB 00:04:03.614 element at address: 0x20001967d6c0 with size: 0.000244 MiB 00:04:03.614 element at address: 0x20001967d7c0 with size: 0.000244 MiB 00:04:03.614 element at address: 0x20001967d8c0 with size: 0.000244 MiB 00:04:03.614 element at address: 0x20001967d9c0 with size: 0.000244 MiB 00:04:03.614 element at address: 0x2000196fdd00 with size: 0.000244 MiB 00:04:03.614 element at address: 0x200019affc40 with size: 0.000244 MiB 00:04:03.614 element at address: 0x200019defbc0 with size: 0.000244 MiB 00:04:03.614 element at address: 0x200019defcc0 with size: 0.000244 MiB 00:04:03.614 element at address: 0x200019ebc680 with size: 0.000244 MiB 00:04:03.614 element at address: 0x20001b48f8c0 with size: 0.000244 MiB 00:04:03.614 element at address: 0x20001b48f9c0 with size: 0.000244 MiB 00:04:03.614 element at address: 0x20001b48fac0 with size: 0.000244 MiB 00:04:03.614 element at address: 0x20001b48fbc0 with size: 0.000244 MiB 00:04:03.614 element at address: 0x20001b48fcc0 with size: 0.000244 MiB 00:04:03.614 element at address: 0x20001b48fdc0 with size: 0.000244 MiB 00:04:03.614 element at address: 0x20001b48fec0 with size: 0.000244 MiB 00:04:03.614 element at address: 0x20001b48ffc0 with size: 0.000244 MiB 00:04:03.614 element at address: 0x20001b4900c0 with size: 0.000244 MiB 00:04:03.614 element at address: 0x20001b4901c0 with size: 0.000244 MiB 00:04:03.614 element at address: 0x20001b4902c0 with size: 0.000244 MiB 00:04:03.614 element at address: 0x20001b4903c0 with size: 0.000244 MiB 00:04:03.614 element at address: 0x20001b4904c0 with size: 0.000244 MiB 00:04:03.614 element at address: 0x20001b4905c0 with size: 0.000244 MiB 00:04:03.614 element at address: 0x20001b4906c0 with size: 0.000244 MiB 00:04:03.614 element at address: 0x20001b4907c0 with size: 0.000244 MiB 00:04:03.614 element at address: 0x20001b4908c0 with size: 0.000244 MiB 00:04:03.614 element at address: 0x20001b4909c0 with size: 0.000244 
MiB 00:04:03.614 element at address: 0x20001b490ac0 with size: 0.000244 MiB 00:04:03.614 element at address: 0x20001b490bc0 with size: 0.000244 MiB 00:04:03.614 element at address: 0x20001b490cc0 with size: 0.000244 MiB 00:04:03.614 element at address: 0x20001b490dc0 with size: 0.000244 MiB 00:04:03.614 element at address: 0x20001b490ec0 with size: 0.000244 MiB 00:04:03.614 element at address: 0x20001b490fc0 with size: 0.000244 MiB 00:04:03.614 element at address: 0x20001b4910c0 with size: 0.000244 MiB 00:04:03.614 element at address: 0x20001b4911c0 with size: 0.000244 MiB 00:04:03.614 element at address: 0x20001b4912c0 with size: 0.000244 MiB 00:04:03.614 element at address: 0x20001b4913c0 with size: 0.000244 MiB 00:04:03.614 element at address: 0x20001b4914c0 with size: 0.000244 MiB 00:04:03.614 element at address: 0x20001b4915c0 with size: 0.000244 MiB 00:04:03.614 element at address: 0x20001b4916c0 with size: 0.000244 MiB 00:04:03.614 element at address: 0x20001b4917c0 with size: 0.000244 MiB 00:04:03.614 element at address: 0x20001b4918c0 with size: 0.000244 MiB 00:04:03.614 element at address: 0x20001b4919c0 with size: 0.000244 MiB 00:04:03.614 element at address: 0x20001b491ac0 with size: 0.000244 MiB 00:04:03.614 element at address: 0x20001b491bc0 with size: 0.000244 MiB 00:04:03.614 element at address: 0x20001b491cc0 with size: 0.000244 MiB 00:04:03.614 element at address: 0x20001b491dc0 with size: 0.000244 MiB 00:04:03.614 element at address: 0x20001b491ec0 with size: 0.000244 MiB 00:04:03.614 element at address: 0x20001b491fc0 with size: 0.000244 MiB 00:04:03.614 element at address: 0x20001b4920c0 with size: 0.000244 MiB 00:04:03.614 element at address: 0x20001b4921c0 with size: 0.000244 MiB 00:04:03.614 element at address: 0x20001b4922c0 with size: 0.000244 MiB 00:04:03.614 element at address: 0x20001b4923c0 with size: 0.000244 MiB 00:04:03.614 element at address: 0x20001b4924c0 with size: 0.000244 MiB 00:04:03.614 element at address: 0x20001b4925c0 
with size: 0.000244 MiB 00:04:03.614 element at address: 0x20001b4926c0 with size: 0.000244 MiB 00:04:03.614 element at address: 0x20001b4927c0 with size: 0.000244 MiB 00:04:03.614 element at address: 0x20001b4928c0 with size: 0.000244 MiB 00:04:03.614 element at address: 0x20001b4929c0 with size: 0.000244 MiB 00:04:03.614 element at address: 0x20001b492ac0 with size: 0.000244 MiB 00:04:03.614 element at address: 0x20001b492bc0 with size: 0.000244 MiB 00:04:03.614 element at address: 0x20001b492cc0 with size: 0.000244 MiB 00:04:03.614 element at address: 0x20001b492dc0 with size: 0.000244 MiB 00:04:03.614 element at address: 0x20001b492ec0 with size: 0.000244 MiB 00:04:03.614 element at address: 0x20001b492fc0 with size: 0.000244 MiB 00:04:03.614 element at address: 0x20001b4930c0 with size: 0.000244 MiB 00:04:03.614 element at address: 0x20001b4931c0 with size: 0.000244 MiB 00:04:03.614 element at address: 0x20001b4932c0 with size: 0.000244 MiB 00:04:03.614 element at address: 0x20001b4933c0 with size: 0.000244 MiB 00:04:03.614 element at address: 0x20001b4934c0 with size: 0.000244 MiB 00:04:03.614 element at address: 0x20001b4935c0 with size: 0.000244 MiB 00:04:03.614 element at address: 0x20001b4936c0 with size: 0.000244 MiB 00:04:03.614 element at address: 0x20001b4937c0 with size: 0.000244 MiB 00:04:03.614 element at address: 0x20001b4938c0 with size: 0.000244 MiB 00:04:03.614 element at address: 0x20001b4939c0 with size: 0.000244 MiB 00:04:03.614 element at address: 0x20001b493ac0 with size: 0.000244 MiB 00:04:03.614 element at address: 0x20001b493bc0 with size: 0.000244 MiB 00:04:03.614 element at address: 0x20001b493cc0 with size: 0.000244 MiB 00:04:03.614 element at address: 0x20001b493dc0 with size: 0.000244 MiB 00:04:03.614 element at address: 0x20001b493ec0 with size: 0.000244 MiB 00:04:03.614 element at address: 0x20001b493fc0 with size: 0.000244 MiB 00:04:03.614 element at address: 0x20001b4940c0 with size: 0.000244 MiB 00:04:03.614 element at 
address: 0x20001b4941c0 with size: 0.000244 MiB 00:04:03.614 element at address: 0x20001b4942c0 with size: 0.000244 MiB 00:04:03.614 element at address: 0x20001b4943c0 with size: 0.000244 MiB 00:04:03.614 element at address: 0x20001b4944c0 with size: 0.000244 MiB 00:04:03.614 element at address: 0x20001b4945c0 with size: 0.000244 MiB 00:04:03.614 element at address: 0x20001b4946c0 with size: 0.000244 MiB 00:04:03.614 element at address: 0x20001b4947c0 with size: 0.000244 MiB 00:04:03.614 element at address: 0x20001b4948c0 with size: 0.000244 MiB 00:04:03.614 element at address: 0x20001b4949c0 with size: 0.000244 MiB 00:04:03.614 element at address: 0x20001b494ac0 with size: 0.000244 MiB 00:04:03.614 element at address: 0x20001b494bc0 with size: 0.000244 MiB 00:04:03.614 element at address: 0x20001b494cc0 with size: 0.000244 MiB 00:04:03.614 element at address: 0x20001b494dc0 with size: 0.000244 MiB 00:04:03.614 element at address: 0x20001b494ec0 with size: 0.000244 MiB 00:04:03.614 element at address: 0x20001b494fc0 with size: 0.000244 MiB 00:04:03.614 element at address: 0x20001b4950c0 with size: 0.000244 MiB 00:04:03.614 element at address: 0x20001b4951c0 with size: 0.000244 MiB 00:04:03.614 element at address: 0x20001b4952c0 with size: 0.000244 MiB 00:04:03.614 element at address: 0x20001b4953c0 with size: 0.000244 MiB 00:04:03.614 element at address: 0x200028863f40 with size: 0.000244 MiB 00:04:03.614 element at address: 0x200028864040 with size: 0.000244 MiB 00:04:03.614 element at address: 0x20002886ad00 with size: 0.000244 MiB 00:04:03.614 element at address: 0x20002886af80 with size: 0.000244 MiB 00:04:03.614 element at address: 0x20002886b080 with size: 0.000244 MiB 00:04:03.614 element at address: 0x20002886b180 with size: 0.000244 MiB 00:04:03.614 element at address: 0x20002886b280 with size: 0.000244 MiB 00:04:03.614 element at address: 0x20002886b380 with size: 0.000244 MiB 00:04:03.614 element at address: 0x20002886b480 with size: 0.000244 MiB 
00:04:03.614 element at address: 0x20002886b580 with size: 0.000244 MiB 00:04:03.614 element at address: 0x20002886b680 with size: 0.000244 MiB 00:04:03.614 element at address: 0x20002886b780 with size: 0.000244 MiB 00:04:03.615 element at address: 0x20002886b880 with size: 0.000244 MiB 00:04:03.615 element at address: 0x20002886b980 with size: 0.000244 MiB 00:04:03.615 element at address: 0x20002886ba80 with size: 0.000244 MiB 00:04:03.615 element at address: 0x20002886bb80 with size: 0.000244 MiB 00:04:03.615 element at address: 0x20002886bc80 with size: 0.000244 MiB 00:04:03.615 element at address: 0x20002886bd80 with size: 0.000244 MiB 00:04:03.615 element at address: 0x20002886be80 with size: 0.000244 MiB 00:04:03.615 element at address: 0x20002886bf80 with size: 0.000244 MiB 00:04:03.615 element at address: 0x20002886c080 with size: 0.000244 MiB 00:04:03.615 element at address: 0x20002886c180 with size: 0.000244 MiB 00:04:03.615 element at address: 0x20002886c280 with size: 0.000244 MiB 00:04:03.615 element at address: 0x20002886c380 with size: 0.000244 MiB 00:04:03.615 element at address: 0x20002886c480 with size: 0.000244 MiB 00:04:03.615 element at address: 0x20002886c580 with size: 0.000244 MiB 00:04:03.615 element at address: 0x20002886c680 with size: 0.000244 MiB 00:04:03.615 element at address: 0x20002886c780 with size: 0.000244 MiB 00:04:03.615 element at address: 0x20002886c880 with size: 0.000244 MiB 00:04:03.615 element at address: 0x20002886c980 with size: 0.000244 MiB 00:04:03.615 element at address: 0x20002886ca80 with size: 0.000244 MiB 00:04:03.615 element at address: 0x20002886cb80 with size: 0.000244 MiB 00:04:03.615 element at address: 0x20002886cc80 with size: 0.000244 MiB 00:04:03.615 element at address: 0x20002886cd80 with size: 0.000244 MiB 00:04:03.615 element at address: 0x20002886ce80 with size: 0.000244 MiB 00:04:03.615 element at address: 0x20002886cf80 with size: 0.000244 MiB 00:04:03.615 element at address: 0x20002886d080 with 
size: 0.000244 MiB 00:04:03.615 element at address: 0x20002886d180 with size: 0.000244 MiB 00:04:03.615 element at address: 0x20002886d280 with size: 0.000244 MiB 00:04:03.615 element at address: 0x20002886d380 with size: 0.000244 MiB 00:04:03.615 element at address: 0x20002886d480 with size: 0.000244 MiB 00:04:03.615 element at address: 0x20002886d580 with size: 0.000244 MiB 00:04:03.615 element at address: 0x20002886d680 with size: 0.000244 MiB 00:04:03.615 element at address: 0x20002886d780 with size: 0.000244 MiB 00:04:03.615 element at address: 0x20002886d880 with size: 0.000244 MiB 00:04:03.615 element at address: 0x20002886d980 with size: 0.000244 MiB 00:04:03.615 element at address: 0x20002886da80 with size: 0.000244 MiB 00:04:03.615 element at address: 0x20002886db80 with size: 0.000244 MiB 00:04:03.615 element at address: 0x20002886dc80 with size: 0.000244 MiB 00:04:03.615 element at address: 0x20002886dd80 with size: 0.000244 MiB 00:04:03.615 element at address: 0x20002886de80 with size: 0.000244 MiB 00:04:03.615 element at address: 0x20002886df80 with size: 0.000244 MiB 00:04:03.615 element at address: 0x20002886e080 with size: 0.000244 MiB 00:04:03.615 element at address: 0x20002886e180 with size: 0.000244 MiB 00:04:03.615 element at address: 0x20002886e280 with size: 0.000244 MiB 00:04:03.615 element at address: 0x20002886e380 with size: 0.000244 MiB 00:04:03.615 element at address: 0x20002886e480 with size: 0.000244 MiB 00:04:03.615 element at address: 0x20002886e580 with size: 0.000244 MiB 00:04:03.615 element at address: 0x20002886e680 with size: 0.000244 MiB 00:04:03.615 element at address: 0x20002886e780 with size: 0.000244 MiB 00:04:03.615 element at address: 0x20002886e880 with size: 0.000244 MiB 00:04:03.615 element at address: 0x20002886e980 with size: 0.000244 MiB 00:04:03.615 element at address: 0x20002886ea80 with size: 0.000244 MiB 00:04:03.615 element at address: 0x20002886eb80 with size: 0.000244 MiB 00:04:03.615 element at address: 
0x20002886ec80 with size: 0.000244 MiB 00:04:03.615 element at address: 0x20002886ed80 with size: 0.000244 MiB 00:04:03.615 element at address: 0x20002886ee80 with size: 0.000244 MiB 00:04:03.615 element at address: 0x20002886ef80 with size: 0.000244 MiB 00:04:03.615 element at address: 0x20002886f080 with size: 0.000244 MiB 00:04:03.615 element at address: 0x20002886f180 with size: 0.000244 MiB 00:04:03.615 element at address: 0x20002886f280 with size: 0.000244 MiB 00:04:03.615 element at address: 0x20002886f380 with size: 0.000244 MiB 00:04:03.615 element at address: 0x20002886f480 with size: 0.000244 MiB 00:04:03.615 element at address: 0x20002886f580 with size: 0.000244 MiB 00:04:03.615 element at address: 0x20002886f680 with size: 0.000244 MiB 00:04:03.615 element at address: 0x20002886f780 with size: 0.000244 MiB 00:04:03.615 element at address: 0x20002886f880 with size: 0.000244 MiB 00:04:03.615 element at address: 0x20002886f980 with size: 0.000244 MiB 00:04:03.615 element at address: 0x20002886fa80 with size: 0.000244 MiB 00:04:03.615 element at address: 0x20002886fb80 with size: 0.000244 MiB 00:04:03.615 element at address: 0x20002886fc80 with size: 0.000244 MiB 00:04:03.615 element at address: 0x20002886fd80 with size: 0.000244 MiB 00:04:03.615 element at address: 0x20002886fe80 with size: 0.000244 MiB 00:04:03.615 list of memzone associated elements. 
size: 607.930908 MiB
00:04:03.615 element at address: 0x20001b4954c0 with size: 211.416809 MiB
00:04:03.615 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0
00:04:03.615 element at address: 0x20002886ff80 with size: 157.562622 MiB
00:04:03.615 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0
00:04:03.615 element at address: 0x200012df1e40 with size: 100.055115 MiB
00:04:03.615 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_57132_0
00:04:03.615 element at address: 0x200000dff340 with size: 48.003113 MiB
00:04:03.615 associated memzone info: size: 48.002930 MiB name: MP_msgpool_57132_0
00:04:03.615 element at address: 0x200003ffdb40 with size: 36.008972 MiB
00:04:03.615 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_57132_0
00:04:03.615 element at address: 0x200019fbe900 with size: 20.255615 MiB
00:04:03.615 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0
00:04:03.615 element at address: 0x2000327feb00 with size: 18.005127 MiB
00:04:03.615 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0
00:04:03.615 element at address: 0x2000004ffec0 with size: 3.000305 MiB
00:04:03.615 associated memzone info: size: 3.000122 MiB name: MP_evtpool_57132_0
00:04:03.615 element at address: 0x2000009ffdc0 with size: 2.000549 MiB
00:04:03.615 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_57132
00:04:03.615 element at address: 0x2000002d7c00 with size: 1.008179 MiB
00:04:03.615 associated memzone info: size: 1.007996 MiB name: MP_evtpool_57132
00:04:03.615 element at address: 0x2000196fde00 with size: 1.008179 MiB
00:04:03.615 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool
00:04:03.615 element at address: 0x200019ebc780 with size: 1.008179 MiB
00:04:03.615 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool
00:04:03.615 element at address: 0x2000192fde00 with size: 1.008179 MiB
00:04:03.615 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool
00:04:03.615 element at address: 0x200012cefcc0 with size: 1.008179 MiB
00:04:03.615 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool
00:04:03.615 element at address: 0x200000cff100 with size: 1.000549 MiB
00:04:03.615 associated memzone info: size: 1.000366 MiB name: RG_ring_0_57132
00:04:03.615 element at address: 0x2000008ffb80 with size: 1.000549 MiB
00:04:03.615 associated memzone info: size: 1.000366 MiB name: RG_ring_1_57132
00:04:03.615 element at address: 0x200019affd40 with size: 1.000549 MiB
00:04:03.615 associated memzone info: size: 1.000366 MiB name: RG_ring_4_57132
00:04:03.615 element at address: 0x2000326fe8c0 with size: 1.000549 MiB
00:04:03.615 associated memzone info: size: 1.000366 MiB name: RG_ring_5_57132
00:04:03.615 element at address: 0x20000087f5c0 with size: 0.500549 MiB
00:04:03.615 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_57132
00:04:03.615 element at address: 0x200000c7ecc0 with size: 0.500549 MiB
00:04:03.615 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_57132
00:04:03.615 element at address: 0x20001967dac0 with size: 0.500549 MiB
00:04:03.615 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool
00:04:03.615 element at address: 0x200012c6f980 with size: 0.500549 MiB
00:04:03.615 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool
00:04:03.615 element at address: 0x200019e7c440 with size: 0.250549 MiB
00:04:03.615 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool
00:04:03.615 element at address: 0x2000002b78c0 with size: 0.125549 MiB
00:04:03.615 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_57132
00:04:03.615 element at address: 0x20000085df80 with size: 0.125549 MiB
00:04:03.615 associated memzone info: size: 0.125366 MiB name: RG_ring_2_57132
00:04:03.615 element at address: 0x2000192f5ac0 with size: 0.031799 MiB
00:04:03.615
associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool
00:04:03.615 element at address: 0x200028864140 with size: 0.023804 MiB
00:04:03.615 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0
00:04:03.615 element at address: 0x200000859d40 with size: 0.016174 MiB
00:04:03.615 associated memzone info: size: 0.015991 MiB name: RG_ring_3_57132
00:04:03.615 element at address: 0x20002886a2c0 with size: 0.002502 MiB
00:04:03.615 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool
00:04:03.615 element at address: 0x2000004ffa40 with size: 0.000366 MiB
00:04:03.615 associated memzone info: size: 0.000183 MiB name: MP_msgpool_57132
00:04:03.615 element at address: 0x2000008ff900 with size: 0.000366 MiB
00:04:03.615 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_57132
00:04:03.615 element at address: 0x200012bffd80 with size: 0.000366 MiB
00:04:03.615 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_57132
00:04:03.615 element at address: 0x20002886ae00 with size: 0.000366 MiB
00:04:03.615 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool
00:04:03.615 19:44:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT
00:04:03.615 19:44:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 57132
00:04:03.615 19:44:54 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 57132 ']'
00:04:03.615 19:44:54 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 57132
00:04:03.615 19:44:54 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname
00:04:03.615 19:44:54 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:04:03.615 19:44:54 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57132
00:04:03.615 19:44:54 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:04:03.615 killing process with pid 57132
00:04:03.615 19:44:54 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:04:03.615 19:44:54 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57132'
00:04:03.615 19:44:54 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 57132
00:04:03.615 19:44:54 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 57132
00:04:05.514
00:04:05.514 real 0m2.901s
00:04:05.514 user 0m2.864s
00:04:05.514 sys 0m0.445s
00:04:05.514 19:44:55 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:05.514 19:44:55 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:04:05.514 ************************************
00:04:05.514 END TEST dpdk_mem_utility
00:04:05.514 ************************************
00:04:05.514 19:44:56 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh
00:04:05.514 19:44:56 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:05.514 19:44:56 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:05.514 19:44:56 -- common/autotest_common.sh@10 -- # set +x
00:04:05.514 ************************************
00:04:05.514 START TEST event
00:04:05.514 ************************************
00:04:05.514 19:44:56 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh
* Looking for test storage...
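The `killprocess 57132` xtrace above walks a fixed sequence: reject an empty pid, probe the process with `kill -0`, resolve its command name with `ps` to decide whether the signal must go through `sudo`, then `kill` and `wait`. A minimal stand-alone sketch of that flow follows; the function body is inferred from the trace, not copied from autotest_common.sh, and the `sudo` branch is an assumption based on the `'[' reactor_0 = sudo ']'` test in the log.

```shell
# Hypothetical re-creation of the killprocess() flow shown in the xtrace
# above; inferred from the trace, not copied from autotest_common.sh.
killprocess() {
    pid=$1
    if [ -z "$pid" ]; then
        return 1                                  # no pid supplied
    fi
    # kill -0 only probes whether the process exists; no signal is delivered
    if kill -0 "$pid" 2>/dev/null; then
        # resolve the command name (GNU ps), as the trace does for reactor_0
        process_name=$(ps --no-headers -o comm= -p "$pid")
        echo "killing process with pid $pid"
        if [ "$process_name" = sudo ]; then
            sudo kill "$pid"                      # signal the sudo-wrapped child
        else
            kill "$pid"
        fi
        wait "$pid" 2>/dev/null || true           # reap it if it was our child
    fi
    return 0
}
```

Note that `wait` only succeeds for children of the current shell, which holds in the harness because the test app was launched from the same script; for foreign pids the redirect and `|| true` swallow the error.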
00:04:05.514 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event
00:04:05.514 19:44:56 event -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:04:05.514 19:44:56 event -- common/autotest_common.sh@1693 -- # lcov --version
00:04:05.514 19:44:56 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:04:05.514 19:44:56 event -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:04:05.514 19:44:56 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:04:05.514 19:44:56 event -- scripts/common.sh@333 -- # local ver1 ver1_l
00:04:05.514 19:44:56 event -- scripts/common.sh@334 -- # local ver2 ver2_l
00:04:05.514 19:44:56 event -- scripts/common.sh@336 -- # IFS=.-:
00:04:05.514 19:44:56 event -- scripts/common.sh@336 -- # read -ra ver1
00:04:05.514 19:44:56 event -- scripts/common.sh@337 -- # IFS=.-:
00:04:05.514 19:44:56 event -- scripts/common.sh@337 -- # read -ra ver2
00:04:05.514 19:44:56 event -- scripts/common.sh@338 -- # local 'op=<'
00:04:05.514 19:44:56 event -- scripts/common.sh@340 -- # ver1_l=2
00:04:05.514 19:44:56 event -- scripts/common.sh@341 -- # ver2_l=1
00:04:05.514 19:44:56 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:04:05.514 19:44:56 event -- scripts/common.sh@344 -- # case "$op" in
00:04:05.514 19:44:56 event -- scripts/common.sh@345 -- # : 1
00:04:05.514 19:44:56 event -- scripts/common.sh@364 -- # (( v = 0 ))
00:04:05.514 19:44:56 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:04:05.514 19:44:56 event -- scripts/common.sh@365 -- # decimal 1
00:04:05.514 19:44:56 event -- scripts/common.sh@353 -- # local d=1
00:04:05.514 19:44:56 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:04:05.514 19:44:56 event -- scripts/common.sh@355 -- # echo 1
00:04:05.514 19:44:56 event -- scripts/common.sh@365 -- # ver1[v]=1
00:04:05.514 19:44:56 event -- scripts/common.sh@366 -- # decimal 2
00:04:05.514 19:44:56 event -- scripts/common.sh@353 -- # local d=2
00:04:05.514 19:44:56 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:04:05.514 19:44:56 event -- scripts/common.sh@355 -- # echo 2
00:04:05.514 19:44:56 event -- scripts/common.sh@366 -- # ver2[v]=2
00:04:05.514 19:44:56 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:04:05.514 19:44:56 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:04:05.514 19:44:56 event -- scripts/common.sh@368 -- # return 0
00:04:05.514 19:44:56 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:04:05.514 19:44:56 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:04:05.514 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:05.514 --rc genhtml_branch_coverage=1
00:04:05.514 --rc genhtml_function_coverage=1
00:04:05.514 --rc genhtml_legend=1
00:04:05.514 --rc geninfo_all_blocks=1
00:04:05.514 --rc geninfo_unexecuted_blocks=1
00:04:05.514
00:04:05.514 '
00:04:05.514 19:44:56 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:04:05.514 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:05.514 --rc genhtml_branch_coverage=1
00:04:05.514 --rc genhtml_function_coverage=1
00:04:05.514 --rc genhtml_legend=1
00:04:05.514 --rc geninfo_all_blocks=1
00:04:05.514 --rc geninfo_unexecuted_blocks=1
00:04:05.514
00:04:05.514 '
00:04:05.514 19:44:56 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:04:05.514 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:05.514 --rc genhtml_branch_coverage=1
00:04:05.514 --rc genhtml_function_coverage=1
00:04:05.514 --rc genhtml_legend=1
00:04:05.514 --rc geninfo_all_blocks=1
00:04:05.514 --rc geninfo_unexecuted_blocks=1
00:04:05.514
00:04:05.514 '
00:04:05.514 19:44:56 event -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:04:05.514 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:05.514 --rc genhtml_branch_coverage=1
00:04:05.514 --rc genhtml_function_coverage=1
00:04:05.514 --rc genhtml_legend=1
00:04:05.514 --rc geninfo_all_blocks=1
00:04:05.514 --rc geninfo_unexecuted_blocks=1
00:04:05.514
00:04:05.514 '
00:04:05.514 19:44:56 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh
00:04:05.514 19:44:56 event -- bdev/nbd_common.sh@6 -- # set -e
00:04:05.514 19:44:56 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:04:05.514 19:44:56 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']'
00:04:05.514 19:44:56 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:05.514 19:44:56 event -- common/autotest_common.sh@10 -- # set +x
00:04:05.514 ************************************
00:04:05.514 START TEST event_perf
00:04:05.514 ************************************
00:04:05.514 19:44:56 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:04:05.514 Running I/O for 1 seconds...[2024-11-26 19:44:56.200038] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization...
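The `lt 1.15 2` → `cmp_versions` xtrace above splits each version string on `.`, `-` and `:` and compares the fields numerically, treating a missing field as 0. A minimal POSIX-sh sketch of that idea follows; the helper name `version_lt` is my own, this sketch handles numeric fields only, and the real scripts/common.sh helper also supports the `>`, `>=`, `<=` and `=` operators.

```shell
# Sketch of the field-by-field version comparison traced above:
# returns 0 (true) when $1 sorts strictly before $2. Numeric fields only.
version_lt() {
    v1=$1 v2=$2
    i=1
    while :; do
        # take the i-th field, splitting on '.', ':' and '-'
        a=$(printf '%s' "$v1" | tr '.:-' '   ' | awk -v n="$i" '{print $n}')
        b=$(printf '%s' "$v2" | tr '.:-' '   ' | awk -v n="$i" '{print $n}')
        if [ -z "$a" ] && [ -z "$b" ]; then
            return 1                  # no fields left: versions are equal
        fi
        a=${a:-0}                     # missing field compares as 0 (1.15 == 1.15.0)
        b=${b:-0}
        if [ "$a" -lt "$b" ]; then return 0; fi
        if [ "$a" -gt "$b" ]; then return 1; fi
        i=$((i + 1))
    done
}
```

Here `version_lt 1.15 2` succeeds, matching the `return 0` the trace reaches after comparing the first fields (1 < 2); the second fields are never consulted.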
00:04:05.515 [2024-11-26 19:44:56.200159] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57225 ]
00:04:05.775 [2024-11-26 19:44:56.362114] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:04:05.775 [2024-11-26 19:44:56.467609] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:04:05.775 [2024-11-26 19:44:56.467769] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:04:05.775 Running I/O for 1 seconds...[2024-11-26 19:44:56.468444] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:04:05.775 [2024-11-26 19:44:56.468494] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:04:06.706
00:04:06.706 lcore 0: 154406
00:04:06.706 lcore 1: 154407
00:04:06.706 lcore 2: 154404
00:04:06.706 lcore 3: 154403
00:04:06.706 done.
00:04:06.706
00:04:06.706 real 0m1.469s
00:04:06.706 user 0m4.251s
00:04:06.706 sys 0m0.095s
00:04:06.707 19:44:57 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:06.707 19:44:57 event.event_perf -- common/autotest_common.sh@10 -- # set +x
00:04:06.707 ************************************
00:04:06.707 END TEST event_perf
00:04:06.707 ************************************
00:04:06.965 19:44:57 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1
00:04:06.965 19:44:57 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:04:06.965 19:44:57 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:06.965 19:44:57 event -- common/autotest_common.sh@10 -- # set +x
00:04:06.965 ************************************
00:04:06.965 START TEST event_reactor
00:04:06.965 ************************************
00:04:06.965 19:44:57 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1
00:04:06.965 [2024-11-26 19:44:57.715465] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization...
00:04:06.965 [2024-11-26 19:44:57.715580] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57264 ]
00:04:07.222 [2024-11-26 19:44:57.874016] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:07.222 [2024-11-26 19:44:57.977916] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:04:08.594 test_start
00:04:08.594 oneshot
00:04:08.594 tick 100
00:04:08.594 tick 100
00:04:08.594 tick 250
00:04:08.594 tick 100
00:04:08.594 tick 100
00:04:08.594 tick 100
00:04:08.594 tick 250
00:04:08.594 tick 500
00:04:08.594 tick 100
00:04:08.594 tick 100
00:04:08.594 tick 250
00:04:08.594 tick 100
00:04:08.594 tick 100
00:04:08.594 test_end
00:04:08.594
00:04:08.594 real 0m1.432s
00:04:08.594 user 0m1.253s
00:04:08.594 sys 0m0.071s
00:04:08.594 19:44:59 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:08.594 19:44:59 event.event_reactor -- common/autotest_common.sh@10 -- # set +x
00:04:08.594 ************************************
00:04:08.594 END TEST event_reactor
00:04:08.594 ************************************
00:04:08.594 19:44:59 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1
00:04:08.594 19:44:59 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:04:08.594 19:44:59 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:08.594 19:44:59 event -- common/autotest_common.sh@10 -- # set +x
00:04:08.594 ************************************
00:04:08.594 START TEST event_reactor_perf
00:04:08.594 ************************************
00:04:08.594 19:44:59 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1
00:04:08.594 [2024-11-26
19:44:59.185198] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 00:04:08.594 [2024-11-26 19:44:59.185310] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57301 ] 00:04:08.594 [2024-11-26 19:44:59.338482] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:08.594 [2024-11-26 19:44:59.439291] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:09.967 test_start 00:04:09.967 test_end 00:04:09.967 Performance: 395954 events per second 00:04:09.967 00:04:09.967 real 0m1.415s 00:04:09.967 user 0m1.249s 00:04:09.967 sys 0m0.059s 00:04:09.967 19:45:00 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:09.967 19:45:00 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:04:09.967 ************************************ 00:04:09.967 END TEST event_reactor_perf 00:04:09.967 ************************************ 00:04:09.967 19:45:00 event -- event/event.sh@49 -- # uname -s 00:04:09.967 19:45:00 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:09.968 19:45:00 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:04:09.968 19:45:00 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:09.968 19:45:00 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:09.968 19:45:00 event -- common/autotest_common.sh@10 -- # set +x 00:04:09.968 ************************************ 00:04:09.968 START TEST event_scheduler 00:04:09.968 ************************************ 00:04:09.968 19:45:00 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:04:09.968 * Looking for test storage... 
00:04:09.968 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:04:09.968 19:45:00 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:09.968 19:45:00 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:04:09.968 19:45:00 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:09.968 19:45:00 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:09.968 19:45:00 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:09.968 19:45:00 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:09.968 19:45:00 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:09.968 19:45:00 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:04:09.968 19:45:00 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:04:09.968 19:45:00 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:04:09.968 19:45:00 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:04:09.968 19:45:00 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:04:09.968 19:45:00 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:04:09.968 19:45:00 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:04:09.968 19:45:00 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:09.968 19:45:00 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:04:09.968 19:45:00 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:04:09.968 19:45:00 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:09.968 19:45:00 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:09.968 19:45:00 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:04:09.968 19:45:00 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:04:09.968 19:45:00 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:09.968 19:45:00 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:04:09.968 19:45:00 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:04:09.968 19:45:00 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:04:09.968 19:45:00 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:04:09.968 19:45:00 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:09.968 19:45:00 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:04:09.968 19:45:00 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:04:09.968 19:45:00 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:09.968 19:45:00 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:09.968 19:45:00 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:04:09.968 19:45:00 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:09.968 19:45:00 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:09.968 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:09.968 --rc genhtml_branch_coverage=1 00:04:09.968 --rc genhtml_function_coverage=1 00:04:09.968 --rc genhtml_legend=1 00:04:09.968 --rc geninfo_all_blocks=1 00:04:09.968 --rc geninfo_unexecuted_blocks=1 00:04:09.968 00:04:09.968 ' 00:04:09.968 19:45:00 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:09.968 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:09.968 --rc genhtml_branch_coverage=1 00:04:09.968 --rc genhtml_function_coverage=1 00:04:09.968 --rc 
genhtml_legend=1 00:04:09.968 --rc geninfo_all_blocks=1 00:04:09.968 --rc geninfo_unexecuted_blocks=1 00:04:09.968 00:04:09.968 ' 00:04:09.968 19:45:00 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:09.968 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:09.968 --rc genhtml_branch_coverage=1 00:04:09.968 --rc genhtml_function_coverage=1 00:04:09.968 --rc genhtml_legend=1 00:04:09.968 --rc geninfo_all_blocks=1 00:04:09.968 --rc geninfo_unexecuted_blocks=1 00:04:09.968 00:04:09.968 ' 00:04:09.968 19:45:00 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:09.968 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:09.968 --rc genhtml_branch_coverage=1 00:04:09.968 --rc genhtml_function_coverage=1 00:04:09.968 --rc genhtml_legend=1 00:04:09.968 --rc geninfo_all_blocks=1 00:04:09.968 --rc geninfo_unexecuted_blocks=1 00:04:09.968 00:04:09.968 ' 00:04:09.968 19:45:00 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:09.968 19:45:00 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=57374 00:04:09.968 19:45:00 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:09.968 19:45:00 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 57374 00:04:09.968 19:45:00 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:09.968 19:45:00 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 57374 ']' 00:04:09.968 19:45:00 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:09.968 19:45:00 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:09.968 19:45:00 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:04:09.968 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:09.968 19:45:00 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:09.968 19:45:00 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:09.968 [2024-11-26 19:45:00.817423] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 00:04:09.968 [2024-11-26 19:45:00.817957] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57374 ] 00:04:10.227 [2024-11-26 19:45:00.978263] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:10.227 [2024-11-26 19:45:01.099623] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:10.227 [2024-11-26 19:45:01.100118] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:10.227 [2024-11-26 19:45:01.100441] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:10.227 [2024-11-26 19:45:01.100601] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:10.793 19:45:01 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:10.793 19:45:01 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:04:10.793 19:45:01 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:10.793 19:45:01 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:10.793 19:45:01 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:10.793 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:10.793 POWER: Cannot set governor of lcore 0 to userspace 00:04:10.793 POWER: failed to open 
/sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:10.793 POWER: Cannot set governor of lcore 0 to performance 00:04:10.793 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:10.793 POWER: Cannot set governor of lcore 0 to userspace 00:04:10.793 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:10.793 POWER: Cannot set governor of lcore 0 to userspace 00:04:10.793 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:04:10.793 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:04:10.793 POWER: Unable to set Power Management Environment for lcore 0 00:04:10.793 [2024-11-26 19:45:01.665900] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:04:10.793 [2024-11-26 19:45:01.665922] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:04:10.793 [2024-11-26 19:45:01.665931] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:04:10.793 [2024-11-26 19:45:01.665951] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:04:10.793 [2024-11-26 19:45:01.665959] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:04:10.793 [2024-11-26 19:45:01.665968] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:04:10.793 19:45:01 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:10.793 19:45:01 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:10.793 19:45:01 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:10.793 19:45:01 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:11.052 [2024-11-26 19:45:01.919679] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:04:11.052 19:45:01 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:11.052 19:45:01 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:11.052 19:45:01 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:11.052 19:45:01 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:11.052 19:45:01 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:11.052 ************************************ 00:04:11.052 START TEST scheduler_create_thread 00:04:11.052 ************************************ 00:04:11.052 19:45:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:04:11.052 19:45:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:11.052 19:45:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:11.052 19:45:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:11.052 2 00:04:11.052 19:45:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:11.052 19:45:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:11.052 19:45:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:11.052 19:45:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:11.052 3 00:04:11.052 19:45:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:11.052 19:45:01 event.event_scheduler.scheduler_create_thread -- 
scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:11.052 19:45:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:11.052 19:45:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:11.052 4 00:04:11.052 19:45:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:11.052 19:45:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:11.052 19:45:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:11.052 19:45:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:11.052 5 00:04:11.052 19:45:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:11.052 19:45:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:11.052 19:45:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:11.052 19:45:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:11.052 6 00:04:11.052 19:45:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:11.052 19:45:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:11.052 19:45:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:11.052 19:45:01 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@10 -- # set +x 00:04:11.052 7 00:04:11.052 19:45:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:11.052 19:45:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:11.052 19:45:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:11.052 19:45:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:11.310 8 00:04:11.310 19:45:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:11.310 19:45:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:11.310 19:45:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:11.310 19:45:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:11.310 9 00:04:11.310 19:45:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:11.310 19:45:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:11.310 19:45:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:11.310 19:45:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:11.310 10 00:04:11.310 19:45:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:11.310 19:45:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n 
half_active -a 0 00:04:11.310 19:45:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:11.310 19:45:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:11.310 19:45:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:11.310 19:45:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:11.310 19:45:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:11.310 19:45:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:11.310 19:45:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:11.310 19:45:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:11.310 19:45:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:11.310 19:45:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:11.310 19:45:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:11.310 19:45:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:11.310 19:45:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:11.310 19:45:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:11.310 19:45:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:11.310 19:45:02 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:12.244 ************************************ 00:04:12.244 END TEST scheduler_create_thread 00:04:12.244 ************************************ 00:04:12.244 19:45:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:12.244 00:04:12.244 real 0m1.175s 00:04:12.244 user 0m0.012s 00:04:12.244 sys 0m0.006s 00:04:12.244 19:45:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:12.244 19:45:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:12.244 19:45:03 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:12.244 19:45:03 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 57374 00:04:12.244 19:45:03 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 57374 ']' 00:04:12.244 19:45:03 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 57374 00:04:12.244 19:45:03 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:04:12.244 19:45:03 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:12.244 19:45:03 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57374 00:04:12.244 19:45:03 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:04:12.244 killing process with pid 57374 00:04:12.244 19:45:03 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:04:12.244 19:45:03 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57374' 00:04:12.244 19:45:03 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 57374 00:04:12.244 19:45:03 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 57374 00:04:12.810 [2024-11-26 19:45:03.585555] scheduler.c: 
360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:04:13.744 00:04:13.744 real 0m3.768s 00:04:13.744 user 0m6.095s 00:04:13.744 sys 0m0.367s 00:04:13.744 19:45:04 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:13.744 ************************************ 00:04:13.744 END TEST event_scheduler 00:04:13.744 ************************************ 00:04:13.744 19:45:04 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:13.744 19:45:04 event -- event/event.sh@51 -- # modprobe -n nbd 00:04:13.744 19:45:04 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:13.744 19:45:04 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:13.744 19:45:04 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:13.744 19:45:04 event -- common/autotest_common.sh@10 -- # set +x 00:04:13.744 ************************************ 00:04:13.744 START TEST app_repeat 00:04:13.744 ************************************ 00:04:13.744 19:45:04 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:04:13.744 19:45:04 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:13.744 19:45:04 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:13.744 19:45:04 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:04:13.744 19:45:04 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:13.744 19:45:04 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:04:13.744 19:45:04 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:04:13.744 19:45:04 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:04:13.744 Process app_repeat pid: 57461 00:04:13.744 19:45:04 event.app_repeat -- event/event.sh@19 -- # repeat_pid=57461 00:04:13.744 19:45:04 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:13.744 
19:45:04 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 57461' 00:04:13.744 19:45:04 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:13.744 spdk_app_start Round 0 00:04:13.744 19:45:04 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:13.744 19:45:04 event.app_repeat -- event/event.sh@25 -- # waitforlisten 57461 /var/tmp/spdk-nbd.sock 00:04:13.744 19:45:04 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 57461 ']' 00:04:13.744 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:13.744 19:45:04 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:13.744 19:45:04 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:13.744 19:45:04 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:13.744 19:45:04 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:13.745 19:45:04 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:13.745 19:45:04 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:13.745 [2024-11-26 19:45:04.482324] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 
00:04:13.745 [2024-11-26 19:45:04.482465] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57461 ] 00:04:13.745 [2024-11-26 19:45:04.643595] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:14.016 [2024-11-26 19:45:04.759819] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:14.016 [2024-11-26 19:45:04.759921] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:14.610 19:45:05 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:14.610 19:45:05 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:14.610 19:45:05 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:14.868 Malloc0 00:04:14.868 19:45:05 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:15.128 Malloc1 00:04:15.128 19:45:05 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:15.128 19:45:05 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:15.128 19:45:05 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:15.128 19:45:05 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:15.128 19:45:05 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:15.128 19:45:05 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:15.128 19:45:05 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:15.128 19:45:05 event.app_repeat -- 
bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:15.128 19:45:05 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:15.128 19:45:05 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:15.128 19:45:05 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:15.128 19:45:05 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:15.128 19:45:05 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:15.128 19:45:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:15.128 19:45:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:15.128 19:45:05 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:15.387 /dev/nbd0 00:04:15.387 19:45:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:15.387 19:45:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:15.387 19:45:06 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:04:15.387 19:45:06 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:15.387 19:45:06 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:15.387 19:45:06 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:15.387 19:45:06 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:04:15.387 19:45:06 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:15.387 19:45:06 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:15.387 19:45:06 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:15.387 19:45:06 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:15.387 1+0 records in 00:04:15.387 1+0 
records out 00:04:15.387 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000282429 s, 14.5 MB/s 00:04:15.387 19:45:06 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:15.387 19:45:06 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:15.387 19:45:06 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:15.387 19:45:06 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:15.387 19:45:06 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:15.387 19:45:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:15.387 19:45:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:15.387 19:45:06 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:15.645 /dev/nbd1 00:04:15.645 19:45:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:15.645 19:45:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:15.645 19:45:06 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:15.645 19:45:06 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:15.645 19:45:06 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:15.645 19:45:06 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:15.645 19:45:06 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:15.645 19:45:06 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:15.645 19:45:06 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:15.645 19:45:06 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:15.645 19:45:06 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:15.645 1+0 records in 00:04:15.645 1+0 records out 00:04:15.645 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000257369 s, 15.9 MB/s 00:04:15.645 19:45:06 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:15.645 19:45:06 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:15.645 19:45:06 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:15.645 19:45:06 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:15.645 19:45:06 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:15.645 19:45:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:15.645 19:45:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:15.645 19:45:06 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:15.645 19:45:06 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:15.645 19:45:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:15.903 19:45:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:15.903 { 00:04:15.903 "nbd_device": "/dev/nbd0", 00:04:15.903 "bdev_name": "Malloc0" 00:04:15.903 }, 00:04:15.903 { 00:04:15.903 "nbd_device": "/dev/nbd1", 00:04:15.903 "bdev_name": "Malloc1" 00:04:15.903 } 00:04:15.903 ]' 00:04:15.903 19:45:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:15.903 { 00:04:15.903 "nbd_device": "/dev/nbd0", 00:04:15.903 "bdev_name": "Malloc0" 00:04:15.903 }, 00:04:15.903 { 00:04:15.903 "nbd_device": "/dev/nbd1", 00:04:15.903 "bdev_name": "Malloc1" 00:04:15.903 } 00:04:15.903 ]' 00:04:15.903 19:45:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 
00:04:15.903 19:45:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:15.903 /dev/nbd1' 00:04:15.903 19:45:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:15.903 /dev/nbd1' 00:04:15.903 19:45:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:15.903 19:45:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:15.903 19:45:06 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:15.903 19:45:06 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:15.903 19:45:06 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:15.903 19:45:06 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:15.903 19:45:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:15.903 19:45:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:15.903 19:45:06 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:15.903 19:45:06 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:15.903 19:45:06 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:15.903 19:45:06 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:15.903 256+0 records in 00:04:15.903 256+0 records out 00:04:15.903 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00694679 s, 151 MB/s 00:04:15.903 19:45:06 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:15.903 19:45:06 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:15.903 256+0 records in 00:04:15.903 256+0 records out 00:04:15.903 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0174653 s, 60.0 MB/s 00:04:15.903 19:45:06 event.app_repeat -- 
bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:15.903 19:45:06 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:15.903 256+0 records in 00:04:15.903 256+0 records out 00:04:15.903 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0274226 s, 38.2 MB/s 00:04:15.903 19:45:06 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:15.903 19:45:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:15.903 19:45:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:15.903 19:45:06 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:15.903 19:45:06 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:15.903 19:45:06 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:15.903 19:45:06 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:15.903 19:45:06 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:15.903 19:45:06 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:04:15.903 19:45:06 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:15.903 19:45:06 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:04:15.903 19:45:06 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:15.903 19:45:06 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:15.903 19:45:06 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:15.903 19:45:06 event.app_repeat -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:15.903 19:45:06 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:15.903 19:45:06 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:15.903 19:45:06 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:15.903 19:45:06 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:16.162 19:45:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:16.162 19:45:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:16.162 19:45:06 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:16.162 19:45:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:16.162 19:45:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:16.162 19:45:06 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:16.162 19:45:06 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:16.162 19:45:06 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:16.162 19:45:06 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:16.162 19:45:06 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:16.425 19:45:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:16.425 19:45:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:16.425 19:45:07 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:16.425 19:45:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:16.425 19:45:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:16.425 19:45:07 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:16.425 19:45:07 event.app_repeat -- bdev/nbd_common.sh@41 -- # 
break 00:04:16.425 19:45:07 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:16.425 19:45:07 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:16.425 19:45:07 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:16.425 19:45:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:16.425 19:45:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:16.425 19:45:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:16.425 19:45:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:16.685 19:45:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:16.685 19:45:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:16.685 19:45:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:16.685 19:45:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:16.685 19:45:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:16.685 19:45:07 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:16.685 19:45:07 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:16.685 19:45:07 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:16.685 19:45:07 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:16.685 19:45:07 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:16.943 19:45:07 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:17.876 [2024-11-26 19:45:08.508863] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:17.876 [2024-11-26 19:45:08.618556] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:17.876 [2024-11-26 19:45:08.618694] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:17.876 
[2024-11-26 19:45:08.734708] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:17.876 [2024-11-26 19:45:08.734779] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:20.409 spdk_app_start Round 1 00:04:20.409 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:20.409 19:45:10 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:20.409 19:45:10 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:04:20.409 19:45:10 event.app_repeat -- event/event.sh@25 -- # waitforlisten 57461 /var/tmp/spdk-nbd.sock 00:04:20.409 19:45:10 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 57461 ']' 00:04:20.409 19:45:10 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:20.409 19:45:10 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:20.409 19:45:10 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:04:20.409 19:45:10 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:20.409 19:45:10 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:20.409 19:45:10 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:20.409 19:45:10 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:20.409 19:45:10 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:20.409 Malloc0 00:04:20.409 19:45:11 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:20.668 Malloc1 00:04:20.668 19:45:11 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:20.668 19:45:11 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:20.668 19:45:11 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:20.668 19:45:11 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:20.668 19:45:11 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:20.668 19:45:11 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:20.668 19:45:11 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:20.668 19:45:11 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:20.668 19:45:11 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:20.668 19:45:11 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:20.668 19:45:11 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:20.668 19:45:11 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:20.668 19:45:11 
event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:20.668 19:45:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:20.668 19:45:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:20.668 19:45:11 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:20.930 /dev/nbd0 00:04:20.930 19:45:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:20.930 19:45:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:20.930 19:45:11 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:04:20.930 19:45:11 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:20.930 19:45:11 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:20.930 19:45:11 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:20.930 19:45:11 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:04:20.930 19:45:11 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:20.930 19:45:11 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:20.930 19:45:11 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:20.930 19:45:11 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:20.930 1+0 records in 00:04:20.930 1+0 records out 00:04:20.930 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000371674 s, 11.0 MB/s 00:04:20.930 19:45:11 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:20.930 19:45:11 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:20.930 19:45:11 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:20.930 
19:45:11 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:20.930 19:45:11 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:20.930 19:45:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:20.930 19:45:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:20.930 19:45:11 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:20.930 /dev/nbd1 00:04:21.190 19:45:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:21.190 19:45:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:21.190 19:45:11 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:21.190 19:45:11 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:21.190 19:45:11 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:21.190 19:45:11 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:21.190 19:45:11 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:21.190 19:45:11 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:21.190 19:45:11 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:21.190 19:45:11 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:21.190 19:45:11 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:21.190 1+0 records in 00:04:21.190 1+0 records out 00:04:21.190 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000183211 s, 22.4 MB/s 00:04:21.190 19:45:11 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:21.190 19:45:11 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:21.190 19:45:11 event.app_repeat 
-- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:21.190 19:45:11 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:21.190 19:45:11 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:21.190 19:45:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:21.190 19:45:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:21.190 19:45:11 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:21.190 19:45:11 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:21.190 19:45:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:21.190 19:45:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:21.190 { 00:04:21.190 "nbd_device": "/dev/nbd0", 00:04:21.190 "bdev_name": "Malloc0" 00:04:21.190 }, 00:04:21.190 { 00:04:21.190 "nbd_device": "/dev/nbd1", 00:04:21.190 "bdev_name": "Malloc1" 00:04:21.190 } 00:04:21.190 ]' 00:04:21.190 19:45:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:21.190 19:45:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:21.190 { 00:04:21.190 "nbd_device": "/dev/nbd0", 00:04:21.190 "bdev_name": "Malloc0" 00:04:21.190 }, 00:04:21.190 { 00:04:21.190 "nbd_device": "/dev/nbd1", 00:04:21.190 "bdev_name": "Malloc1" 00:04:21.190 } 00:04:21.190 ]' 00:04:21.448 19:45:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:21.448 /dev/nbd1' 00:04:21.448 19:45:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:21.448 /dev/nbd1' 00:04:21.448 19:45:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:21.448 19:45:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:21.448 19:45:12 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:21.448 
19:45:12 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:21.448 19:45:12 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:21.448 19:45:12 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:21.448 19:45:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:21.448 19:45:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:21.448 19:45:12 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:21.448 19:45:12 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:21.448 19:45:12 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:21.448 19:45:12 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:21.448 256+0 records in 00:04:21.448 256+0 records out 00:04:21.448 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00441388 s, 238 MB/s 00:04:21.448 19:45:12 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:21.448 19:45:12 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:21.448 256+0 records in 00:04:21.448 256+0 records out 00:04:21.448 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0195711 s, 53.6 MB/s 00:04:21.448 19:45:12 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:21.448 19:45:12 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:21.448 256+0 records in 00:04:21.448 256+0 records out 00:04:21.448 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0169352 s, 61.9 MB/s 00:04:21.448 19:45:12 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 
00:04:21.448 19:45:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:21.448 19:45:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:21.448 19:45:12 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:21.448 19:45:12 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:21.448 19:45:12 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:21.448 19:45:12 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:21.448 19:45:12 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:21.448 19:45:12 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:04:21.448 19:45:12 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:21.448 19:45:12 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:04:21.448 19:45:12 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:21.448 19:45:12 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:21.448 19:45:12 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:21.448 19:45:12 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:21.448 19:45:12 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:21.448 19:45:12 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:21.448 19:45:12 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:21.448 19:45:12 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:21.706 19:45:12 
event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:21.706 19:45:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:21.706 19:45:12 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:21.706 19:45:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:21.706 19:45:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:21.706 19:45:12 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:21.706 19:45:12 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:21.706 19:45:12 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:21.706 19:45:12 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:21.706 19:45:12 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:21.963 19:45:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:21.963 19:45:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:21.963 19:45:12 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:21.963 19:45:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:21.963 19:45:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:21.964 19:45:12 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:21.964 19:45:12 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:21.964 19:45:12 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:21.964 19:45:12 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:21.964 19:45:12 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:21.964 19:45:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:21.964 19:45:12 
event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:21.964 19:45:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:21.964 19:45:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:22.222 19:45:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:22.222 19:45:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:22.222 19:45:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:22.222 19:45:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:22.222 19:45:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:22.222 19:45:12 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:22.222 19:45:12 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:22.222 19:45:12 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:22.222 19:45:12 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:22.222 19:45:12 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:22.480 19:45:13 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:23.128 [2024-11-26 19:45:13.842560] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:23.128 [2024-11-26 19:45:13.942557] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:23.128 [2024-11-26 19:45:13.942664] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:23.128 [2024-11-26 19:45:14.057941] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:23.128 [2024-11-26 19:45:14.058159] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:25.657 spdk_app_start Round 2 00:04:25.657 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:04:25.657 19:45:16 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:25.657 19:45:16 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:04:25.657 19:45:16 event.app_repeat -- event/event.sh@25 -- # waitforlisten 57461 /var/tmp/spdk-nbd.sock 00:04:25.657 19:45:16 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 57461 ']' 00:04:25.657 19:45:16 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:25.657 19:45:16 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:25.657 19:45:16 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:25.657 19:45:16 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:25.657 19:45:16 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:25.657 19:45:16 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:25.657 19:45:16 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:25.657 19:45:16 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:25.915 Malloc0 00:04:25.915 19:45:16 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:26.174 Malloc1 00:04:26.174 19:45:16 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:26.174 19:45:16 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:26.174 19:45:16 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:26.174 19:45:16 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:26.174 19:45:16 event.app_repeat -- bdev/nbd_common.sh@92 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:26.174 19:45:16 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:26.174 19:45:16 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:26.174 19:45:16 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:26.174 19:45:16 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:26.174 19:45:16 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:26.174 19:45:16 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:26.174 19:45:16 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:26.174 19:45:16 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:26.174 19:45:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:26.174 19:45:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:26.174 19:45:16 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:26.174 /dev/nbd0 00:04:26.432 19:45:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:26.432 19:45:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:26.432 19:45:17 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:04:26.432 19:45:17 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:26.432 19:45:17 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:26.432 19:45:17 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:26.432 19:45:17 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:04:26.432 19:45:17 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:26.432 19:45:17 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 
00:04:26.432 19:45:17 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:26.432 19:45:17 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:26.432 1+0 records in 00:04:26.432 1+0 records out 00:04:26.432 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000252362 s, 16.2 MB/s 00:04:26.432 19:45:17 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:26.432 19:45:17 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:26.432 19:45:17 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:26.432 19:45:17 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:26.432 19:45:17 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:26.432 19:45:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:26.432 19:45:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:26.432 19:45:17 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:26.432 /dev/nbd1 00:04:26.432 19:45:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:26.432 19:45:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:26.432 19:45:17 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:26.432 19:45:17 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:26.432 19:45:17 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:26.432 19:45:17 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:26.432 19:45:17 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:26.432 19:45:17 event.app_repeat -- 
common/autotest_common.sh@877 -- # break 00:04:26.432 19:45:17 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:26.432 19:45:17 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:26.432 19:45:17 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:26.432 1+0 records in 00:04:26.432 1+0 records out 00:04:26.432 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0001885 s, 21.7 MB/s 00:04:26.433 19:45:17 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:26.433 19:45:17 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:26.433 19:45:17 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:26.433 19:45:17 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:26.433 19:45:17 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:26.433 19:45:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:26.433 19:45:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:26.690 19:45:17 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:26.690 19:45:17 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:26.690 19:45:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:26.690 19:45:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:26.690 { 00:04:26.690 "nbd_device": "/dev/nbd0", 00:04:26.690 "bdev_name": "Malloc0" 00:04:26.690 }, 00:04:26.690 { 00:04:26.690 "nbd_device": "/dev/nbd1", 00:04:26.690 "bdev_name": "Malloc1" 00:04:26.690 } 00:04:26.690 ]' 00:04:26.690 19:45:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:26.690 { 
00:04:26.690 "nbd_device": "/dev/nbd0", 00:04:26.690 "bdev_name": "Malloc0" 00:04:26.690 }, 00:04:26.690 { 00:04:26.690 "nbd_device": "/dev/nbd1", 00:04:26.690 "bdev_name": "Malloc1" 00:04:26.690 } 00:04:26.690 ]' 00:04:26.690 19:45:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:26.690 19:45:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:26.690 /dev/nbd1' 00:04:26.690 19:45:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:26.690 /dev/nbd1' 00:04:26.690 19:45:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:26.690 19:45:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:26.690 19:45:17 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:26.690 19:45:17 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:26.690 19:45:17 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:26.690 19:45:17 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:26.690 19:45:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:26.690 19:45:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:26.690 19:45:17 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:26.690 19:45:17 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:26.690 19:45:17 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:26.690 19:45:17 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:26.690 256+0 records in 00:04:26.690 256+0 records out 00:04:26.690 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00491988 s, 213 MB/s 00:04:26.690 19:45:17 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:26.690 19:45:17 event.app_repeat -- 
bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:26.948 256+0 records in 00:04:26.948 256+0 records out 00:04:26.948 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0163712 s, 64.0 MB/s 00:04:26.948 19:45:17 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:26.948 19:45:17 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:26.948 256+0 records in 00:04:26.948 256+0 records out 00:04:26.948 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0202341 s, 51.8 MB/s 00:04:26.948 19:45:17 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:26.948 19:45:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:26.948 19:45:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:26.948 19:45:17 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:26.948 19:45:17 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:26.948 19:45:17 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:26.948 19:45:17 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:26.948 19:45:17 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:26.948 19:45:17 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:04:26.948 19:45:17 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:26.948 19:45:17 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:04:26.948 19:45:17 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 
00:04:26.948 19:45:17 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:26.948 19:45:17 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:26.948 19:45:17 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:26.948 19:45:17 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:26.948 19:45:17 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:26.948 19:45:17 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:26.948 19:45:17 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:27.206 19:45:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:27.206 19:45:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:27.206 19:45:17 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:27.206 19:45:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:27.206 19:45:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:27.206 19:45:17 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:27.206 19:45:17 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:27.206 19:45:17 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:27.206 19:45:17 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:27.206 19:45:17 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:27.464 19:45:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:27.464 19:45:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:27.464 19:45:18 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:27.464 19:45:18 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:27.464 19:45:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:27.464 19:45:18 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:27.464 19:45:18 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:27.464 19:45:18 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:27.464 19:45:18 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:27.464 19:45:18 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:27.464 19:45:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:27.722 19:45:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:27.722 19:45:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:27.722 19:45:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:27.722 19:45:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:27.722 19:45:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:27.722 19:45:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:27.722 19:45:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:27.722 19:45:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:27.722 19:45:18 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:27.722 19:45:18 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:27.722 19:45:18 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:27.722 19:45:18 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:27.722 19:45:18 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:27.980 19:45:18 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:28.544 
[2024-11-26 19:45:19.441034] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:28.855 [2024-11-26 19:45:19.538888] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:28.855 [2024-11-26 19:45:19.539002] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:28.855 [2024-11-26 19:45:19.649789] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:28.855 [2024-11-26 19:45:19.650068] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:31.382 19:45:21 event.app_repeat -- event/event.sh@38 -- # waitforlisten 57461 /var/tmp/spdk-nbd.sock 00:04:31.382 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:31.382 19:45:21 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 57461 ']' 00:04:31.382 19:45:21 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:31.382 19:45:21 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:31.382 19:45:21 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:04:31.382 19:45:21 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:31.382 19:45:21 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:31.382 19:45:22 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:31.382 19:45:22 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:31.382 19:45:22 event.app_repeat -- event/event.sh@39 -- # killprocess 57461 00:04:31.382 19:45:22 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 57461 ']' 00:04:31.382 19:45:22 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 57461 00:04:31.382 19:45:22 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:04:31.382 19:45:22 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:31.382 19:45:22 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57461 00:04:31.382 killing process with pid 57461 00:04:31.382 19:45:22 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:31.382 19:45:22 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:31.382 19:45:22 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57461' 00:04:31.382 19:45:22 event.app_repeat -- common/autotest_common.sh@973 -- # kill 57461 00:04:31.382 19:45:22 event.app_repeat -- common/autotest_common.sh@978 -- # wait 57461 00:04:31.951 spdk_app_start is called in Round 0. 00:04:31.951 Shutdown signal received, stop current app iteration 00:04:31.951 Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 reinitialization... 00:04:31.951 spdk_app_start is called in Round 1. 00:04:31.951 Shutdown signal received, stop current app iteration 00:04:31.951 Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 reinitialization... 00:04:31.951 spdk_app_start is called in Round 2. 
00:04:31.951 Shutdown signal received, stop current app iteration 00:04:31.951 Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 reinitialization... 00:04:31.951 spdk_app_start is called in Round 3. 00:04:31.951 Shutdown signal received, stop current app iteration 00:04:31.951 ************************************ 00:04:31.951 END TEST app_repeat 00:04:31.951 ************************************ 00:04:31.951 19:45:22 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:04:31.951 19:45:22 event.app_repeat -- event/event.sh@42 -- # return 0 00:04:31.951 00:04:31.951 real 0m18.199s 00:04:31.951 user 0m39.787s 00:04:31.951 sys 0m2.235s 00:04:31.951 19:45:22 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:31.951 19:45:22 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:31.951 19:45:22 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:04:31.951 19:45:22 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:04:31.951 19:45:22 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:31.951 19:45:22 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:31.951 19:45:22 event -- common/autotest_common.sh@10 -- # set +x 00:04:31.951 ************************************ 00:04:31.951 START TEST cpu_locks 00:04:31.951 ************************************ 00:04:31.951 19:45:22 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:04:31.951 * Looking for test storage... 
00:04:31.951 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:04:31.951 19:45:22 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:31.951 19:45:22 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:04:31.951 19:45:22 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:31.951 19:45:22 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:31.951 19:45:22 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:31.951 19:45:22 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:31.951 19:45:22 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:31.951 19:45:22 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:04:31.951 19:45:22 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:04:31.951 19:45:22 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:04:31.951 19:45:22 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:04:31.951 19:45:22 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:04:31.951 19:45:22 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:04:31.951 19:45:22 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:04:31.951 19:45:22 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:31.951 19:45:22 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:04:31.951 19:45:22 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:04:31.951 19:45:22 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:31.951 19:45:22 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:31.951 19:45:22 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:04:31.951 19:45:22 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:04:31.951 19:45:22 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:31.951 19:45:22 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:04:31.951 19:45:22 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:04:31.951 19:45:22 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:04:31.951 19:45:22 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:04:31.951 19:45:22 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:31.951 19:45:22 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:04:31.951 19:45:22 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:04:31.951 19:45:22 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:31.951 19:45:22 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:31.951 19:45:22 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:04:31.951 19:45:22 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:31.951 19:45:22 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:31.951 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:31.951 --rc genhtml_branch_coverage=1 00:04:31.951 --rc genhtml_function_coverage=1 00:04:31.951 --rc genhtml_legend=1 00:04:31.951 --rc geninfo_all_blocks=1 00:04:31.951 --rc geninfo_unexecuted_blocks=1 00:04:31.951 00:04:31.951 ' 00:04:31.951 19:45:22 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:31.951 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:31.951 --rc genhtml_branch_coverage=1 00:04:31.951 --rc genhtml_function_coverage=1 00:04:31.951 --rc genhtml_legend=1 00:04:31.951 --rc geninfo_all_blocks=1 00:04:31.951 --rc geninfo_unexecuted_blocks=1 
00:04:31.951 00:04:31.951 ' 00:04:31.951 19:45:22 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:31.951 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:31.951 --rc genhtml_branch_coverage=1 00:04:31.951 --rc genhtml_function_coverage=1 00:04:31.951 --rc genhtml_legend=1 00:04:31.951 --rc geninfo_all_blocks=1 00:04:31.951 --rc geninfo_unexecuted_blocks=1 00:04:31.951 00:04:31.951 ' 00:04:31.951 19:45:22 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:31.951 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:31.951 --rc genhtml_branch_coverage=1 00:04:31.951 --rc genhtml_function_coverage=1 00:04:31.951 --rc genhtml_legend=1 00:04:31.951 --rc geninfo_all_blocks=1 00:04:31.951 --rc geninfo_unexecuted_blocks=1 00:04:31.951 00:04:31.951 ' 00:04:31.951 19:45:22 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:04:31.951 19:45:22 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:04:31.951 19:45:22 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:04:31.951 19:45:22 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:04:31.951 19:45:22 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:31.951 19:45:22 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:31.951 19:45:22 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:31.951 ************************************ 00:04:31.951 START TEST default_locks 00:04:31.951 ************************************ 00:04:31.951 19:45:22 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:04:31.951 19:45:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=57897 00:04:31.951 19:45:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 57897 00:04:31.951 19:45:22 event.cpu_locks.default_locks -- 
common/autotest_common.sh@835 -- # '[' -z 57897 ']' 00:04:31.951 19:45:22 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:31.951 19:45:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:31.951 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:31.951 19:45:22 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:31.951 19:45:22 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:31.952 19:45:22 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:31.952 19:45:22 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:32.209 [2024-11-26 19:45:22.912509] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 
00:04:32.209 [2024-11-26 19:45:22.912814] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57897 ] 00:04:32.209 [2024-11-26 19:45:23.071755] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:32.467 [2024-11-26 19:45:23.190399] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:33.032 19:45:23 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:33.032 19:45:23 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:04:33.032 19:45:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 57897 00:04:33.032 19:45:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 57897 00:04:33.032 19:45:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:33.290 19:45:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 57897 00:04:33.290 19:45:24 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 57897 ']' 00:04:33.290 19:45:24 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 57897 00:04:33.290 19:45:24 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:04:33.290 19:45:24 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:33.290 19:45:24 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57897 00:04:33.290 killing process with pid 57897 00:04:33.290 19:45:24 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:33.290 19:45:24 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:33.290 19:45:24 event.cpu_locks.default_locks -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 57897' 00:04:33.290 19:45:24 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 57897 00:04:33.290 19:45:24 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 57897 00:04:35.187 19:45:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 57897 00:04:35.187 19:45:25 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:04:35.187 19:45:25 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 57897 00:04:35.187 19:45:25 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:04:35.187 19:45:25 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:35.187 19:45:25 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:04:35.187 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:35.187 19:45:25 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:35.187 19:45:25 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 57897 00:04:35.187 19:45:25 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 57897 ']' 00:04:35.187 19:45:25 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:35.187 19:45:25 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:35.187 19:45:25 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:04:35.187 ERROR: process (pid: 57897) is no longer running 00:04:35.187 19:45:25 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:35.187 19:45:25 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:35.187 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (57897) - No such process 00:04:35.187 19:45:25 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:35.187 19:45:25 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:04:35.187 19:45:25 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:04:35.187 19:45:25 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:35.187 19:45:25 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:35.187 19:45:25 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:35.187 19:45:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:04:35.187 19:45:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:35.187 19:45:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:04:35.187 19:45:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:35.187 00:04:35.187 real 0m2.897s 00:04:35.187 user 0m2.857s 00:04:35.187 sys 0m0.554s 00:04:35.187 19:45:25 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:35.187 19:45:25 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:35.187 ************************************ 00:04:35.187 END TEST default_locks 00:04:35.187 ************************************ 00:04:35.187 19:45:25 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:04:35.187 19:45:25 event.cpu_locks -- common/autotest_common.sh@1105 -- # 
'[' 2 -le 1 ']' 00:04:35.187 19:45:25 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:35.187 19:45:25 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:35.187 ************************************ 00:04:35.187 START TEST default_locks_via_rpc 00:04:35.187 ************************************ 00:04:35.187 19:45:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:04:35.187 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:35.187 19:45:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=57961 00:04:35.187 19:45:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 57961 00:04:35.187 19:45:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 57961 ']' 00:04:35.187 19:45:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:35.187 19:45:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:35.187 19:45:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:35.187 19:45:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:35.187 19:45:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:35.187 19:45:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:35.187 [2024-11-26 19:45:25.854450] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 
00:04:35.187 [2024-11-26 19:45:25.854582] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57961 ]
00:04:35.187 [2024-11-26 19:45:26.013362] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:35.490 [2024-11-26 19:45:26.129725] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:04:36.058 19:45:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:04:36.058 19:45:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0
00:04:36.058 19:45:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks
00:04:36.058 19:45:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:36.058 19:45:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:04:36.058 19:45:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:36.058 19:45:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks
00:04:36.058 19:45:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=()
00:04:36.058 19:45:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files
00:04:36.058 19:45:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:04:36.058 19:45:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks
00:04:36.058 19:45:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:36.058 19:45:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:04:36.058 19:45:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:36.058 19:45:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 57961
00:04:36.058 19:45:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 57961
00:04:36.058 19:45:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:04:36.315 19:45:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 57961
00:04:36.315 19:45:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 57961 ']'
00:04:36.315 19:45:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 57961
00:04:36.315 19:45:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname
00:04:36.315 19:45:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:04:36.315 19:45:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57961
00:04:36.315 killing process with pid 57961 19:45:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:04:36.315 19:45:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:04:36.315 19:45:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57961'
00:04:36.315 19:45:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 57961
00:04:36.315 19:45:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 57961
00:04:37.689 ************************************
00:04:37.689 END TEST default_locks_via_rpc
00:04:37.689 ************************************
00:04:37.689
00:04:37.689 real 0m2.821s
00:04:37.689 user 0m2.775s
00:04:37.689 sys 0m0.520s
00:04:37.689 19:45:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:37.689 19:45:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:04:37.947 19:45:28 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask
00:04:37.947 19:45:28 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:37.948 19:45:28 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:37.948 19:45:28 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:04:37.948 ************************************
00:04:37.948 START TEST non_locking_app_on_locked_coremask
00:04:37.948 ************************************
00:04:37.948 19:45:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask
00:04:37.948 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:04:37.948 19:45:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=58024
00:04:37.948 19:45:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 58024 /var/tmp/spdk.sock
00:04:37.948 19:45:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58024 ']'
00:04:37.948 19:45:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:37.948 19:45:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:04:37.948 19:45:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:04:37.948 19:45:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:04:37.948 19:45:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:04:37.948 19:45:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:04:37.948 [2024-11-26 19:45:28.716353] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization...
00:04:37.948 [2024-11-26 19:45:28.716489] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58024 ]
00:04:37.948 [2024-11-26 19:45:28.876413] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:38.206 [2024-11-26 19:45:28.979253] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:04:38.776 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:04:38.776 19:45:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:04:38.776 19:45:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0
00:04:38.776 19:45:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=58040
00:04:38.776 19:45:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 58040 /var/tmp/spdk2.sock
00:04:38.776 19:45:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58040 ']'
00:04:38.776 19:45:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:04:38.776 19:45:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:04:38.776 19:45:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:04:38.776 19:45:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:04:38.776 19:45:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:04:38.776 19:45:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock
00:04:38.776 [2024-11-26 19:45:29.628932] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization...
00:04:38.776 [2024-11-26 19:45:29.629075] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58040 ]
00:04:39.033 [2024-11-26 19:45:29.793796] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:04:39.033 [2024-11-26 19:45:29.793866] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:39.291 [2024-11-26 19:45:30.000878] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:04:40.223 19:45:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:04:40.223 19:45:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0
00:04:40.223 19:45:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 58024
00:04:40.223 19:45:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58024
00:04:40.223 19:45:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:04:40.481 19:45:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 58024
00:04:40.481 19:45:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58024 ']'
00:04:40.481 19:45:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 58024
00:04:40.481 19:45:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname
00:04:40.481 19:45:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:04:40.481 19:45:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58024
00:04:40.739 killing process with pid 58024 19:45:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:04:40.739 19:45:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:04:40.739 19:45:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58024'
00:04:40.739 19:45:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58024
00:04:40.739 19:45:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58024
00:04:43.267 19:45:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 58040
00:04:43.267 19:45:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58040 ']'
00:04:43.267 19:45:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 58040
00:04:43.267 19:45:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname
00:04:43.267 19:45:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:04:43.267 19:45:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58040
00:04:43.267 killing process with pid 58040 19:45:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:04:43.267 19:45:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:04:43.267 19:45:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58040'
00:04:43.267 19:45:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58040
00:04:43.267 19:45:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58040
00:04:44.642 ************************************
00:04:44.642 END TEST non_locking_app_on_locked_coremask
00:04:44.642 ************************************
00:04:44.642
00:04:44.642 real 0m6.717s
00:04:44.642 user 0m6.895s
00:04:44.642 sys 0m0.955s
00:04:44.642 19:45:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:44.642 19:45:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:04:44.642 19:45:35 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask
00:04:44.642 19:45:35 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:44.642 19:45:35 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:44.642 19:45:35 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:04:44.642 ************************************
00:04:44.642 START TEST locking_app_on_unlocked_coremask
00:04:44.642 ************************************
00:04:44.642 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:04:44.642 19:45:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask
00:04:44.642 19:45:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=58131
00:04:44.642 19:45:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 58131 /var/tmp/spdk.sock
00:04:44.642 19:45:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58131 ']'
00:04:44.642 19:45:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:44.642 19:45:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:04:44.642 19:45:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:04:44.642 19:45:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:04:44.642 19:45:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:04:44.642 19:45:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks
00:04:44.642 [2024-11-26 19:45:35.478397] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization...
00:04:44.642 [2024-11-26 19:45:35.478523] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58131 ]
00:04:44.900 [2024-11-26 19:45:35.634237] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:04:44.900 [2024-11-26 19:45:35.634435] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:44.900 [2024-11-26 19:45:35.737070] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:04:45.466 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:04:45.466 19:45:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:04:45.466 19:45:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0
00:04:45.466 19:45:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=58147
00:04:45.466 19:45:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 58147 /var/tmp/spdk2.sock
00:04:45.466 19:45:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:04:45.466 19:45:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58147 ']'
00:04:45.466 19:45:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:04:45.466 19:45:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:04:45.466 19:45:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:04:45.466 19:45:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:04:45.466 19:45:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:04:45.724 [2024-11-26 19:45:36.412635] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization...
00:04:45.724 [2024-11-26 19:45:36.412952] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58147 ]
00:04:45.724 [2024-11-26 19:45:36.576978] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:45.983 [2024-11-26 19:45:36.783206] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:04:47.356 19:45:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:04:47.356 19:45:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0
00:04:47.356 19:45:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 58147
00:04:47.356 19:45:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58147
00:04:47.356 19:45:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:04:47.356 19:45:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 58131
00:04:47.356 19:45:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58131 ']'
00:04:47.356 19:45:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 58131
00:04:47.356 19:45:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname
00:04:47.356 19:45:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:04:47.356 19:45:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58131
00:04:47.356 killing process with pid 58131 19:45:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:04:47.356 19:45:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:04:47.356 19:45:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58131'
00:04:47.356 19:45:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 58131
00:04:47.356 19:45:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 58131
00:04:49.884 19:45:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 58147
00:04:49.884 19:45:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58147 ']'
00:04:49.884 19:45:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 58147
00:04:49.884 19:45:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname
00:04:49.884 19:45:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:04:50.152 19:45:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58147
00:04:50.152 killing process with pid 58147 19:45:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:04:50.152 19:45:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:04:50.153 19:45:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58147'
00:04:50.153 19:45:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 58147
00:04:50.153 19:45:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 58147
00:04:51.524 ************************************
00:04:51.524 END TEST locking_app_on_unlocked_coremask
00:04:51.524 ************************************
00:04:51.524
00:04:51.524 real 0m6.738s
00:04:51.524 user 0m6.943s
00:04:51.524 sys 0m0.926s
00:04:51.525 19:45:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:51.525 19:45:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:04:51.525 19:45:42 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask
00:04:51.525 19:45:42 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:51.525 19:45:42 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:51.525 19:45:42 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:04:51.525 ************************************
00:04:51.525 START TEST locking_app_on_locked_coremask
00:04:51.525 ************************************
00:04:51.525 19:45:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask
00:04:51.525 19:45:42 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=58249
00:04:51.525 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:04:51.525 19:45:42 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 58249 /var/tmp/spdk.sock
00:04:51.525 19:45:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58249 ']'
00:04:51.525 19:45:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:51.525 19:45:42 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:04:51.525 19:45:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:04:51.525 19:45:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:04:51.525 19:45:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:04:51.525 19:45:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:04:51.525 [2024-11-26 19:45:42.262158] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization...
00:04:51.525 [2024-11-26 19:45:42.262296] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58249 ]
00:04:51.525 [2024-11-26 19:45:42.421208] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:51.887 [2024-11-26 19:45:42.523639] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:04:52.169 19:45:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:04:52.169 19:45:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0
00:04:52.169 19:45:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=58265
00:04:52.169 19:45:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 58265 /var/tmp/spdk2.sock
00:04:52.169 19:45:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0
00:04:52.169 19:45:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:04:52.169 19:45:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 58265 /var/tmp/spdk2.sock
00:04:52.169 19:45:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten
00:04:52.427 19:45:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:04:52.427 19:45:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten
00:04:52.427 19:45:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:04:52.427 19:45:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 58265 /var/tmp/spdk2.sock
00:04:52.427 19:45:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58265 ']'
00:04:52.427 19:45:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:04:52.427 19:45:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:04:52.427 19:45:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:04:52.427 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 19:45:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:04:52.427 19:45:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:04:52.427 [2024-11-26 19:45:43.180112] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization...
00:04:52.427 [2024-11-26 19:45:43.180393] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58265 ]
00:04:52.427 [2024-11-26 19:45:43.342380] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 58249 has claimed it.
00:04:52.427 [2024-11-26 19:45:43.342443] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:04:52.992 ERROR: process (pid: 58265) is no longer running
00:04:52.992 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (58265) - No such process
00:04:52.992 19:45:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:04:52.992 19:45:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1
00:04:52.992 19:45:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1
00:04:52.992 19:45:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:04:52.992 19:45:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:04:52.992 19:45:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:04:52.992 19:45:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 58249
00:04:52.992 19:45:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58249
00:04:52.992 19:45:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:04:53.250 19:45:44 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 58249
00:04:53.250 19:45:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58249 ']'
00:04:53.250 19:45:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 58249
00:04:53.250 19:45:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname
00:04:53.250 19:45:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:04:53.250 19:45:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58249
00:04:53.250 killing process with pid 58249 19:45:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:04:53.250 19:45:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:04:53.250 19:45:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58249'
00:04:53.250 19:45:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58249
00:04:53.250 19:45:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58249
00:04:54.624
00:04:54.624 real 0m3.160s
00:04:54.624 user 0m3.336s
00:04:54.624 sys 0m0.618s
00:04:54.624 19:45:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:54.624 19:45:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:04:54.624 ************************************
00:04:54.624 END TEST locking_app_on_locked_coremask
00:04:54.624 ************************************
00:04:54.624 19:45:45 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask
00:04:54.624 19:45:45 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:54.624 19:45:45 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:54.624 19:45:45 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:04:54.624 ************************************
00:04:54.624 START TEST locking_overlapped_coremask
00:04:54.624 ************************************
00:04:54.624 19:45:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask
00:04:54.624 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:04:54.624 19:45:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=58318
00:04:54.624 19:45:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 58318 /var/tmp/spdk.sock
00:04:54.624 19:45:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 58318 ']'
00:04:54.624 19:45:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:54.624 19:45:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:04:54.624 19:45:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:04:54.624 19:45:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:04:54.624 19:45:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:04:54.624 19:45:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7
00:04:54.624 [2024-11-26 19:45:45.472110] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization...
00:04:54.624 [2024-11-26 19:45:45.472531] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58318 ] 00:04:54.882 [2024-11-26 19:45:45.631433] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:54.882 [2024-11-26 19:45:45.740153] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:54.882 [2024-11-26 19:45:45.740238] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:54.882 [2024-11-26 19:45:45.740263] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:55.447 19:45:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:55.447 19:45:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:55.447 19:45:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=58336 00:04:55.447 19:45:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:04:55.447 19:45:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 58336 /var/tmp/spdk2.sock 00:04:55.447 19:45:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:04:55.447 19:45:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 58336 /var/tmp/spdk2.sock 00:04:55.447 19:45:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:04:55.447 19:45:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:55.447 19:45:46 event.cpu_locks.locking_overlapped_coremask 
-- common/autotest_common.sh@644 -- # type -t waitforlisten 00:04:55.447 19:45:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:55.447 19:45:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 58336 /var/tmp/spdk2.sock 00:04:55.447 19:45:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 58336 ']' 00:04:55.447 19:45:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:55.447 19:45:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:55.447 19:45:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:55.447 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:55.447 19:45:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:55.447 19:45:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:55.447 [2024-11-26 19:45:46.350316] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 00:04:55.447 [2024-11-26 19:45:46.350562] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58336 ] 00:04:55.705 [2024-11-26 19:45:46.521968] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 58318 has claimed it. 00:04:55.705 [2024-11-26 19:45:46.522047] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
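The failure above ("Cannot create lock on core 2, probably process 58318 has claimed it") is the expected outcome of two targets with overlapping coremasks. The lock-file names (`/var/tmp/spdk_cpu_lock_NNN`) appear later in the log in `check_remaining_locks`; that they are held via `flock(2)` is an assumption about `spdk_tgt`'s implementation — this sketch just reproduces the claim/deny behavior in a temp directory:

```python
# Sketch of per-core advisory locking, modeled on the lock-file names in the
# log. Uses a temp dir instead of /var/tmp; flock(2) semantics are assumed.
import fcntl
import os
import tempfile

lockdir = tempfile.mkdtemp()

def claim_core(core: int):
    """Try to claim a core; return the held fd, or None if already claimed."""
    path = os.path.join(lockdir, f"spdk_cpu_lock_{core:03d}")
    fd = os.open(path, os.O_RDWR | os.O_CREAT, 0o600)
    try:
        fcntl.flock(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
        return fd
    except BlockingIOError:
        # another open file description already holds the exclusive lock
        os.close(fd)
        return None

first = claim_core(2)   # plays the part of pid 58318
second = claim_core(2)  # plays the part of pid 58336: the claim is denied
```

Because `flock` locks are tied to the open file description, the second `claim_core` is denied even within a single process, mirroring how the second `spdk_tgt` exits before ever listening on `/var/tmp/spdk2.sock`.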
00:04:56.272 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (58336) - No such process 00:04:56.272 ERROR: process (pid: 58336) is no longer running 00:04:56.272 19:45:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:56.272 19:45:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:04:56.272 19:45:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:04:56.272 19:45:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:56.272 19:45:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:56.272 19:45:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:56.272 19:45:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:04:56.272 19:45:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:04:56.272 19:45:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:04:56.272 19:45:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:04:56.272 19:45:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 58318 00:04:56.272 19:45:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 58318 ']' 00:04:56.272 19:45:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 58318 00:04:56.272 19:45:47 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:04:56.272 19:45:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:56.272 19:45:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58318 00:04:56.272 killing process with pid 58318 00:04:56.272 19:45:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:56.272 19:45:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:56.272 19:45:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58318' 00:04:56.272 19:45:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 58318 00:04:56.272 19:45:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 58318 00:04:57.646 00:04:57.646 real 0m2.962s 00:04:57.646 user 0m7.862s 00:04:57.646 sys 0m0.478s 00:04:57.646 19:45:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:57.646 ************************************ 00:04:57.646 END TEST locking_overlapped_coremask 00:04:57.646 ************************************ 00:04:57.646 19:45:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:57.646 19:45:48 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:04:57.646 19:45:48 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:57.647 19:45:48 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:57.647 19:45:48 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:57.647 ************************************ 00:04:57.647 START TEST 
locking_overlapped_coremask_via_rpc 00:04:57.647 ************************************ 00:04:57.647 19:45:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:04:57.647 19:45:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=58389 00:04:57.647 19:45:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 58389 /var/tmp/spdk.sock 00:04:57.647 19:45:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 58389 ']' 00:04:57.647 19:45:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:57.647 19:45:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:57.647 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:57.647 19:45:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:04:57.647 19:45:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:57.647 19:45:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:57.647 19:45:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:57.647 [2024-11-26 19:45:48.474844] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 
00:04:57.647 [2024-11-26 19:45:48.475527] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58389 ] 00:04:57.904 [2024-11-26 19:45:48.652095] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:04:57.904 [2024-11-26 19:45:48.652357] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:57.904 [2024-11-26 19:45:48.762646] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:57.904 [2024-11-26 19:45:48.762875] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:57.904 [2024-11-26 19:45:48.763008] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:58.471 19:45:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:58.471 19:45:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:58.471 19:45:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=58407 00:04:58.471 19:45:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:04:58.471 19:45:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 58407 /var/tmp/spdk2.sock 00:04:58.471 19:45:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 58407 ']' 00:04:58.471 19:45:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:58.471 19:45:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:58.471 19:45:49 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:58.471 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:58.471 19:45:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:58.471 19:45:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:58.471 [2024-11-26 19:45:49.401090] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 00:04:58.471 [2024-11-26 19:45:49.401396] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58407 ] 00:04:58.730 [2024-11-26 19:45:49.576595] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
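With `--disable-cpumask-locks` (the notice "CPU core locks deactivated" above) both targets start even though `-m 0x7` and `-m 0x1c` overlap; the claim step is simply skipped at startup. A minimal sketch of why core 2 is the contested one, with the mask values taken from the log:

```python
def overlapping_cores(mask_a: int, mask_b: int) -> list[int]:
    """Cores present in both coremasks (bit i set means core i is used)."""
    both = mask_a & mask_b
    return [c for c in range(both.bit_length()) if both >> c & 1]

# masks from the log: -m 0x7 covers cores 0-2, -m 0x1c covers cores 2-4
contested = overlapping_cores(0x7, 0x1c)  # -> [2]
```

This is exactly why the later `framework_enable_cpumask_locks` RPC fails with "Failed to claim CPU core: 2": core 2 is the only core both processes were started on.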
00:04:58.730 [2024-11-26 19:45:49.576664] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:58.986 [2024-11-26 19:45:49.824430] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:58.986 [2024-11-26 19:45:49.828429] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:58.986 [2024-11-26 19:45:49.828443] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:05:00.381 19:45:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:00.381 19:45:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:00.381 19:45:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:00.381 19:45:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:00.381 19:45:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:00.381 19:45:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:00.381 19:45:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:00.381 19:45:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:00.381 19:45:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:00.381 19:45:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:00.381 19:45:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:00.381 19:45:51 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:00.381 19:45:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:00.381 19:45:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:00.381 19:45:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:00.381 19:45:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:00.381 [2024-11-26 19:45:51.146557] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 58389 has claimed it. 00:05:00.381 request: 00:05:00.381 { 00:05:00.381 "method": "framework_enable_cpumask_locks", 00:05:00.381 "req_id": 1 00:05:00.381 } 00:05:00.381 Got JSON-RPC error response 00:05:00.381 response: 00:05:00.381 { 00:05:00.381 "code": -32603, 00:05:00.381 "message": "Failed to claim CPU core: 2" 00:05:00.381 } 00:05:00.381 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
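The request/response pair printed above is plain JSON-RPC. A small sketch of how a client could classify this failure; the payloads are copied from the log, while collapsing them into a single JSON object here is a simplification of the actual framing on the Unix socket:

```python
import json

# error object as shown in the log's "response:" block
error = json.loads('{"code": -32603, "message": "Failed to claim CPU core: 2"}')

# -32603 is the generic JSON-RPC "internal error" code, so a client must
# inspect the message text to recognize an overlapping-coremask failure
claim_failed = (error["code"] == -32603
                and error["message"].startswith("Failed to claim CPU core"))
```

The test then asserts on this error path (`[[ 1 == 0 ]]` failing, `es=1`), confirming the RPC route enforces the same per-core claim as startup-time locking.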
00:05:00.381 19:45:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:00.381 19:45:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:05:00.381 19:45:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:00.381 19:45:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:00.381 19:45:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:00.381 19:45:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 58389 /var/tmp/spdk.sock 00:05:00.381 19:45:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 58389 ']' 00:05:00.381 19:45:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:00.381 19:45:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:00.381 19:45:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:00.381 19:45:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:00.381 19:45:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:00.637 19:45:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:00.637 19:45:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:00.637 19:45:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 58407 /var/tmp/spdk2.sock 00:05:00.637 19:45:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 58407 ']' 00:05:00.637 19:45:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:00.637 19:45:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:00.637 19:45:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:00.637 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:05:00.637 19:45:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:00.637 19:45:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:00.894 19:45:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:00.894 19:45:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:00.894 19:45:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:00.894 19:45:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:00.894 19:45:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:00.894 19:45:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:00.894 00:05:00.894 real 0m3.207s 00:05:00.894 user 0m1.113s 00:05:00.894 sys 0m0.137s 00:05:00.894 19:45:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:00.894 19:45:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:00.894 ************************************ 00:05:00.894 END TEST locking_overlapped_coremask_via_rpc 00:05:00.894 ************************************ 00:05:00.894 19:45:51 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:00.894 19:45:51 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 58389 ]] 00:05:00.894 19:45:51 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 58389 00:05:00.894 19:45:51 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 58389 ']' 00:05:00.894 19:45:51 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 58389 00:05:00.894 19:45:51 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:05:00.894 19:45:51 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:00.894 19:45:51 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58389 00:05:00.894 killing process with pid 58389 00:05:00.894 19:45:51 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:00.894 19:45:51 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:00.894 19:45:51 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58389' 00:05:00.894 19:45:51 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 58389 00:05:00.894 19:45:51 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 58389 00:05:02.262 19:45:52 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 58407 ]] 00:05:02.262 19:45:52 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 58407 00:05:02.262 19:45:52 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 58407 ']' 00:05:02.262 19:45:52 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 58407 00:05:02.262 19:45:53 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:05:02.262 19:45:53 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:02.262 19:45:53 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58407 00:05:02.262 killing process with pid 58407 00:05:02.262 19:45:53 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:02.262 19:45:53 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:02.262 19:45:53 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing 
process with pid 58407' 00:05:02.262 19:45:53 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 58407 00:05:02.262 19:45:53 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 58407 00:05:03.629 19:45:54 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:03.630 Process with pid 58389 is not found 00:05:03.630 Process with pid 58407 is not found 00:05:03.630 19:45:54 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:03.630 19:45:54 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 58389 ]] 00:05:03.630 19:45:54 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 58389 00:05:03.630 19:45:54 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 58389 ']' 00:05:03.630 19:45:54 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 58389 00:05:03.630 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (58389) - No such process 00:05:03.630 19:45:54 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 58389 is not found' 00:05:03.630 19:45:54 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 58407 ]] 00:05:03.630 19:45:54 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 58407 00:05:03.630 19:45:54 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 58407 ']' 00:05:03.630 19:45:54 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 58407 00:05:03.630 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (58407) - No such process 00:05:03.630 19:45:54 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 58407 is not found' 00:05:03.630 19:45:54 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:03.630 00:05:03.630 real 0m31.674s 00:05:03.630 user 0m54.095s 00:05:03.630 sys 0m5.145s 00:05:03.630 ************************************ 00:05:03.630 END TEST cpu_locks 00:05:03.630 ************************************ 00:05:03.630 19:45:54 event.cpu_locks -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:05:03.630 19:45:54 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:03.630 ************************************ 00:05:03.630 END TEST event 00:05:03.630 ************************************ 00:05:03.630 00:05:03.630 real 0m58.375s 00:05:03.630 user 1m46.900s 00:05:03.630 sys 0m8.212s 00:05:03.630 19:45:54 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:03.630 19:45:54 event -- common/autotest_common.sh@10 -- # set +x 00:05:03.630 19:45:54 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:03.630 19:45:54 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:03.630 19:45:54 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:03.630 19:45:54 -- common/autotest_common.sh@10 -- # set +x 00:05:03.630 ************************************ 00:05:03.630 START TEST thread 00:05:03.630 ************************************ 00:05:03.630 19:45:54 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:03.630 * Looking for test storage... 
00:05:03.630 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:05:03.630 19:45:54 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:03.630 19:45:54 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:05:03.630 19:45:54 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:03.888 19:45:54 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:03.888 19:45:54 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:03.888 19:45:54 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:03.888 19:45:54 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:03.888 19:45:54 thread -- scripts/common.sh@336 -- # IFS=.-: 00:05:03.888 19:45:54 thread -- scripts/common.sh@336 -- # read -ra ver1 00:05:03.888 19:45:54 thread -- scripts/common.sh@337 -- # IFS=.-: 00:05:03.888 19:45:54 thread -- scripts/common.sh@337 -- # read -ra ver2 00:05:03.888 19:45:54 thread -- scripts/common.sh@338 -- # local 'op=<' 00:05:03.888 19:45:54 thread -- scripts/common.sh@340 -- # ver1_l=2 00:05:03.888 19:45:54 thread -- scripts/common.sh@341 -- # ver2_l=1 00:05:03.888 19:45:54 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:03.888 19:45:54 thread -- scripts/common.sh@344 -- # case "$op" in 00:05:03.888 19:45:54 thread -- scripts/common.sh@345 -- # : 1 00:05:03.888 19:45:54 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:03.888 19:45:54 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:03.888 19:45:54 thread -- scripts/common.sh@365 -- # decimal 1 00:05:03.888 19:45:54 thread -- scripts/common.sh@353 -- # local d=1 00:05:03.888 19:45:54 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:03.888 19:45:54 thread -- scripts/common.sh@355 -- # echo 1 00:05:03.888 19:45:54 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:05:03.888 19:45:54 thread -- scripts/common.sh@366 -- # decimal 2 00:05:03.888 19:45:54 thread -- scripts/common.sh@353 -- # local d=2 00:05:03.888 19:45:54 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:03.888 19:45:54 thread -- scripts/common.sh@355 -- # echo 2 00:05:03.888 19:45:54 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:05:03.888 19:45:54 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:03.888 19:45:54 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:03.888 19:45:54 thread -- scripts/common.sh@368 -- # return 0 00:05:03.888 19:45:54 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:03.888 19:45:54 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:03.888 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.888 --rc genhtml_branch_coverage=1 00:05:03.888 --rc genhtml_function_coverage=1 00:05:03.888 --rc genhtml_legend=1 00:05:03.888 --rc geninfo_all_blocks=1 00:05:03.888 --rc geninfo_unexecuted_blocks=1 00:05:03.888 00:05:03.888 ' 00:05:03.888 19:45:54 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:03.888 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.888 --rc genhtml_branch_coverage=1 00:05:03.888 --rc genhtml_function_coverage=1 00:05:03.888 --rc genhtml_legend=1 00:05:03.888 --rc geninfo_all_blocks=1 00:05:03.888 --rc geninfo_unexecuted_blocks=1 00:05:03.888 00:05:03.888 ' 00:05:03.888 19:45:54 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:03.888 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.888 --rc genhtml_branch_coverage=1 00:05:03.888 --rc genhtml_function_coverage=1 00:05:03.888 --rc genhtml_legend=1 00:05:03.888 --rc geninfo_all_blocks=1 00:05:03.888 --rc geninfo_unexecuted_blocks=1 00:05:03.888 00:05:03.888 ' 00:05:03.888 19:45:54 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:03.888 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.888 --rc genhtml_branch_coverage=1 00:05:03.888 --rc genhtml_function_coverage=1 00:05:03.888 --rc genhtml_legend=1 00:05:03.888 --rc geninfo_all_blocks=1 00:05:03.888 --rc geninfo_unexecuted_blocks=1 00:05:03.888 00:05:03.888 ' 00:05:03.888 19:45:54 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:03.888 19:45:54 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:05:03.888 19:45:54 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:03.888 19:45:54 thread -- common/autotest_common.sh@10 -- # set +x 00:05:03.888 ************************************ 00:05:03.888 START TEST thread_poller_perf 00:05:03.888 ************************************ 00:05:03.888 19:45:54 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:03.888 [2024-11-26 19:45:54.659627] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 
00:05:03.888 [2024-11-26 19:45:54.659904] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58567 ] 00:05:03.888 [2024-11-26 19:45:54.817282] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:04.146 [2024-11-26 19:45:54.925281] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:04.146 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:05:05.519 [2024-11-26T19:45:56.456Z] ====================================== 00:05:05.519 [2024-11-26T19:45:56.456Z] busy:2606761838 (cyc) 00:05:05.519 [2024-11-26T19:45:56.456Z] total_run_count: 388000 00:05:05.519 [2024-11-26T19:45:56.456Z] tsc_hz: 2600000000 (cyc) 00:05:05.519 [2024-11-26T19:45:56.456Z] ====================================== 00:05:05.519 [2024-11-26T19:45:56.456Z] poller_cost: 6718 (cyc), 2583 (nsec) 00:05:05.519 ************************************ 00:05:05.519 END TEST thread_poller_perf 00:05:05.519 ************************************ 00:05:05.519 00:05:05.519 real 0m1.436s 00:05:05.519 user 0m1.257s 00:05:05.519 sys 0m0.072s 00:05:05.519 19:45:56 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:05.519 19:45:56 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:05.519 19:45:56 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:05.519 19:45:56 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:05:05.519 19:45:56 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:05.519 19:45:56 thread -- common/autotest_common.sh@10 -- # set +x 00:05:05.519 ************************************ 00:05:05.519 START TEST thread_poller_perf 00:05:05.519 
************************************ 00:05:05.519 19:45:56 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:05.519 [2024-11-26 19:45:56.140676] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 00:05:05.519 [2024-11-26 19:45:56.140808] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58604 ] 00:05:05.519 [2024-11-26 19:45:56.297079] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:05.519 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:05:05.519 [2024-11-26 19:45:56.403822] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:06.893 [2024-11-26T19:45:57.831Z] ====================================== 00:05:06.894 [2024-11-26T19:45:57.831Z] busy:2602612362 (cyc) 00:05:06.894 [2024-11-26T19:45:57.831Z] total_run_count: 5064000 00:05:06.894 [2024-11-26T19:45:57.831Z] tsc_hz: 2600000000 (cyc) 00:05:06.894 [2024-11-26T19:45:57.831Z] ====================================== 00:05:06.894 [2024-11-26T19:45:57.831Z] poller_cost: 513 (cyc), 197 (nsec) 00:05:06.894 ************************************ 00:05:06.894 END TEST thread_poller_perf 00:05:06.894 ************************************ 00:05:06.894 00:05:06.894 real 0m1.435s 00:05:06.894 user 0m1.250s 00:05:06.894 sys 0m0.077s 00:05:06.894 19:45:57 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:06.894 19:45:57 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:06.894 19:45:57 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:06.894 00:05:06.894 real 0m3.136s 00:05:06.894 user 0m2.638s 00:05:06.894 sys 0m0.274s 00:05:06.894 19:45:57 thread -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:05:06.894 ************************************ 00:05:06.894 19:45:57 thread -- common/autotest_common.sh@10 -- # set +x 00:05:06.894 END TEST thread 00:05:06.894 ************************************ 00:05:06.894 19:45:57 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:05:06.894 19:45:57 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:05:06.894 19:45:57 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:06.894 19:45:57 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:06.894 19:45:57 -- common/autotest_common.sh@10 -- # set +x 00:05:06.894 ************************************ 00:05:06.894 START TEST app_cmdline 00:05:06.894 ************************************ 00:05:06.894 19:45:57 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:05:06.894 * Looking for test storage... 00:05:06.894 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:05:06.894 19:45:57 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:06.894 19:45:57 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:05:06.894 19:45:57 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:06.894 19:45:57 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:06.894 19:45:57 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:06.894 19:45:57 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:06.894 19:45:57 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:06.894 19:45:57 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:05:06.894 19:45:57 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:05:06.894 19:45:57 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:05:06.894 19:45:57 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:05:06.894 19:45:57 app_cmdline -- scripts/common.sh@338 -- # 
local 'op=<' 00:05:06.894 19:45:57 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:05:06.894 19:45:57 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:05:06.894 19:45:57 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:06.894 19:45:57 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:05:06.894 19:45:57 app_cmdline -- scripts/common.sh@345 -- # : 1 00:05:06.894 19:45:57 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:06.894 19:45:57 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:06.894 19:45:57 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:05:06.894 19:45:57 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:05:06.894 19:45:57 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:06.894 19:45:57 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:05:06.894 19:45:57 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:05:06.894 19:45:57 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:05:06.894 19:45:57 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:05:06.894 19:45:57 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:06.894 19:45:57 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:05:06.894 19:45:57 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:05:06.894 19:45:57 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:06.894 19:45:57 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:06.894 19:45:57 app_cmdline -- scripts/common.sh@368 -- # return 0 00:05:06.894 19:45:57 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:06.894 19:45:57 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:06.894 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:06.894 --rc genhtml_branch_coverage=1 00:05:06.894 --rc genhtml_function_coverage=1 00:05:06.894 --rc 
genhtml_legend=1 00:05:06.894 --rc geninfo_all_blocks=1 00:05:06.894 --rc geninfo_unexecuted_blocks=1 00:05:06.894 00:05:06.894 ' 00:05:06.894 19:45:57 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:06.894 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:06.894 --rc genhtml_branch_coverage=1 00:05:06.894 --rc genhtml_function_coverage=1 00:05:06.894 --rc genhtml_legend=1 00:05:06.894 --rc geninfo_all_blocks=1 00:05:06.894 --rc geninfo_unexecuted_blocks=1 00:05:06.894 00:05:06.894 ' 00:05:06.894 19:45:57 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:06.894 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:06.894 --rc genhtml_branch_coverage=1 00:05:06.894 --rc genhtml_function_coverage=1 00:05:06.894 --rc genhtml_legend=1 00:05:06.894 --rc geninfo_all_blocks=1 00:05:06.894 --rc geninfo_unexecuted_blocks=1 00:05:06.894 00:05:06.894 ' 00:05:06.894 19:45:57 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:06.894 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:06.894 --rc genhtml_branch_coverage=1 00:05:06.894 --rc genhtml_function_coverage=1 00:05:06.894 --rc genhtml_legend=1 00:05:06.894 --rc geninfo_all_blocks=1 00:05:06.894 --rc geninfo_unexecuted_blocks=1 00:05:06.894 00:05:06.894 ' 00:05:06.894 19:45:57 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:05:06.894 19:45:57 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=58687 00:05:06.894 19:45:57 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 58687 00:05:06.894 19:45:57 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 58687 ']' 00:05:06.894 19:45:57 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:06.894 19:45:57 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:06.894 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:06.894 19:45:57 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:06.894 19:45:57 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:06.894 19:45:57 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:05:06.894 19:45:57 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:07.153 [2024-11-26 19:45:57.859685] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 00:05:07.153 [2024-11-26 19:45:57.860468] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58687 ] 00:05:07.153 [2024-11-26 19:45:58.020071] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:07.413 [2024-11-26 19:45:58.126605] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:07.979 19:45:58 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:07.979 19:45:58 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:05:07.979 19:45:58 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:05:07.979 { 00:05:07.979 "version": "SPDK v25.01-pre git sha1 e43b3b914", 00:05:07.979 "fields": { 00:05:07.979 "major": 25, 00:05:07.979 "minor": 1, 00:05:07.979 "patch": 0, 00:05:07.979 "suffix": "-pre", 00:05:07.979 "commit": "e43b3b914" 00:05:07.979 } 00:05:07.979 } 00:05:07.979 19:45:58 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:05:07.979 19:45:58 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:05:07.979 19:45:58 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:05:07.979 19:45:58 app_cmdline 
-- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:05:07.979 19:45:58 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:05:07.979 19:45:58 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:05:07.979 19:45:58 app_cmdline -- app/cmdline.sh@26 -- # sort 00:05:07.979 19:45:58 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:07.979 19:45:58 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:08.237 19:45:58 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:08.237 19:45:58 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:05:08.237 19:45:58 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:05:08.237 19:45:58 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:08.237 19:45:58 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:05:08.237 19:45:58 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:08.237 19:45:58 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:08.237 19:45:58 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:08.237 19:45:58 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:08.237 19:45:58 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:08.237 19:45:58 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:08.237 19:45:58 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:08.237 19:45:58 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:08.237 19:45:58 app_cmdline -- 
common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:05:08.237 19:45:58 app_cmdline -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:08.237 request: 00:05:08.237 { 00:05:08.237 "method": "env_dpdk_get_mem_stats", 00:05:08.237 "req_id": 1 00:05:08.237 } 00:05:08.237 Got JSON-RPC error response 00:05:08.237 response: 00:05:08.237 { 00:05:08.237 "code": -32601, 00:05:08.237 "message": "Method not found" 00:05:08.237 } 00:05:08.237 19:45:59 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:05:08.237 19:45:59 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:08.237 19:45:59 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:08.237 19:45:59 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:08.237 19:45:59 app_cmdline -- app/cmdline.sh@1 -- # killprocess 58687 00:05:08.237 19:45:59 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 58687 ']' 00:05:08.237 19:45:59 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 58687 00:05:08.237 19:45:59 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:05:08.237 19:45:59 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:08.237 19:45:59 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58687 00:05:08.237 killing process with pid 58687 00:05:08.237 19:45:59 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:08.237 19:45:59 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:08.237 19:45:59 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58687' 00:05:08.237 19:45:59 app_cmdline -- common/autotest_common.sh@973 -- # kill 58687 00:05:08.237 19:45:59 app_cmdline -- common/autotest_common.sh@978 -- # wait 58687 00:05:09.699 ************************************ 00:05:09.699 END TEST app_cmdline 00:05:09.699 
************************************ 00:05:09.699 00:05:09.699 real 0m2.817s 00:05:09.699 user 0m3.044s 00:05:09.699 sys 0m0.492s 00:05:09.699 19:46:00 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:09.699 19:46:00 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:09.699 19:46:00 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:05:09.699 19:46:00 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:09.699 19:46:00 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:09.699 19:46:00 -- common/autotest_common.sh@10 -- # set +x 00:05:09.699 ************************************ 00:05:09.699 START TEST version 00:05:09.699 ************************************ 00:05:09.699 19:46:00 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:05:09.699 * Looking for test storage... 00:05:09.699 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:05:09.699 19:46:00 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:09.699 19:46:00 version -- common/autotest_common.sh@1693 -- # lcov --version 00:05:09.699 19:46:00 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:09.699 19:46:00 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:09.699 19:46:00 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:09.699 19:46:00 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:09.699 19:46:00 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:09.699 19:46:00 version -- scripts/common.sh@336 -- # IFS=.-: 00:05:09.699 19:46:00 version -- scripts/common.sh@336 -- # read -ra ver1 00:05:09.699 19:46:00 version -- scripts/common.sh@337 -- # IFS=.-: 00:05:09.699 19:46:00 version -- scripts/common.sh@337 -- # read -ra ver2 00:05:09.699 19:46:00 version -- scripts/common.sh@338 -- # local 'op=<' 00:05:09.699 19:46:00 version -- scripts/common.sh@340 -- # ver1_l=2 
00:05:09.699 19:46:00 version -- scripts/common.sh@341 -- # ver2_l=1 00:05:09.699 19:46:00 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:09.699 19:46:00 version -- scripts/common.sh@344 -- # case "$op" in 00:05:09.699 19:46:00 version -- scripts/common.sh@345 -- # : 1 00:05:09.699 19:46:00 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:09.699 19:46:00 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:09.699 19:46:00 version -- scripts/common.sh@365 -- # decimal 1 00:05:09.699 19:46:00 version -- scripts/common.sh@353 -- # local d=1 00:05:09.699 19:46:00 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:09.699 19:46:00 version -- scripts/common.sh@355 -- # echo 1 00:05:09.699 19:46:00 version -- scripts/common.sh@365 -- # ver1[v]=1 00:05:09.699 19:46:00 version -- scripts/common.sh@366 -- # decimal 2 00:05:09.699 19:46:00 version -- scripts/common.sh@353 -- # local d=2 00:05:09.699 19:46:00 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:09.699 19:46:00 version -- scripts/common.sh@355 -- # echo 2 00:05:09.699 19:46:00 version -- scripts/common.sh@366 -- # ver2[v]=2 00:05:09.699 19:46:00 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:09.699 19:46:00 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:09.699 19:46:00 version -- scripts/common.sh@368 -- # return 0 00:05:09.699 19:46:00 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:09.699 19:46:00 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:09.699 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.699 --rc genhtml_branch_coverage=1 00:05:09.699 --rc genhtml_function_coverage=1 00:05:09.699 --rc genhtml_legend=1 00:05:09.699 --rc geninfo_all_blocks=1 00:05:09.699 --rc geninfo_unexecuted_blocks=1 00:05:09.699 00:05:09.699 ' 00:05:09.699 19:46:00 version -- 
common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:09.699 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.699 --rc genhtml_branch_coverage=1 00:05:09.699 --rc genhtml_function_coverage=1 00:05:09.699 --rc genhtml_legend=1 00:05:09.699 --rc geninfo_all_blocks=1 00:05:09.699 --rc geninfo_unexecuted_blocks=1 00:05:09.699 00:05:09.699 ' 00:05:09.699 19:46:00 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:09.699 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.699 --rc genhtml_branch_coverage=1 00:05:09.699 --rc genhtml_function_coverage=1 00:05:09.699 --rc genhtml_legend=1 00:05:09.699 --rc geninfo_all_blocks=1 00:05:09.699 --rc geninfo_unexecuted_blocks=1 00:05:09.699 00:05:09.699 ' 00:05:09.699 19:46:00 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:09.699 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.699 --rc genhtml_branch_coverage=1 00:05:09.699 --rc genhtml_function_coverage=1 00:05:09.699 --rc genhtml_legend=1 00:05:09.699 --rc geninfo_all_blocks=1 00:05:09.699 --rc geninfo_unexecuted_blocks=1 00:05:09.699 00:05:09.699 ' 00:05:09.699 19:46:00 version -- app/version.sh@17 -- # get_header_version major 00:05:09.699 19:46:00 version -- app/version.sh@14 -- # cut -f2 00:05:09.699 19:46:00 version -- app/version.sh@14 -- # tr -d '"' 00:05:09.699 19:46:00 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:09.958 19:46:00 version -- app/version.sh@17 -- # major=25 00:05:09.958 19:46:00 version -- app/version.sh@18 -- # get_header_version minor 00:05:09.958 19:46:00 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:09.958 19:46:00 version -- app/version.sh@14 -- # tr -d '"' 00:05:09.958 19:46:00 version -- app/version.sh@14 -- # cut -f2 00:05:09.958 19:46:00 version -- app/version.sh@18 -- 
# minor=1 00:05:09.958 19:46:00 version -- app/version.sh@19 -- # get_header_version patch 00:05:09.958 19:46:00 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:09.958 19:46:00 version -- app/version.sh@14 -- # cut -f2 00:05:09.958 19:46:00 version -- app/version.sh@14 -- # tr -d '"' 00:05:09.958 19:46:00 version -- app/version.sh@19 -- # patch=0 00:05:09.958 19:46:00 version -- app/version.sh@20 -- # get_header_version suffix 00:05:09.958 19:46:00 version -- app/version.sh@14 -- # cut -f2 00:05:09.958 19:46:00 version -- app/version.sh@14 -- # tr -d '"' 00:05:09.958 19:46:00 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:09.958 19:46:00 version -- app/version.sh@20 -- # suffix=-pre 00:05:09.958 19:46:00 version -- app/version.sh@22 -- # version=25.1 00:05:09.958 19:46:00 version -- app/version.sh@25 -- # (( patch != 0 )) 00:05:09.958 19:46:00 version -- app/version.sh@28 -- # version=25.1rc0 00:05:09.958 19:46:00 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:05:09.958 19:46:00 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:05:09.958 19:46:00 version -- app/version.sh@30 -- # py_version=25.1rc0 00:05:09.958 19:46:00 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:05:09.958 00:05:09.958 real 0m0.199s 00:05:09.958 user 0m0.114s 00:05:09.958 sys 0m0.109s 00:05:09.958 ************************************ 00:05:09.958 END TEST version 00:05:09.958 ************************************ 00:05:09.958 19:46:00 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:09.958 19:46:00 version -- 
common/autotest_common.sh@10 -- # set +x 00:05:09.958 19:46:00 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:05:09.958 19:46:00 -- spdk/autotest.sh@188 -- # [[ 1 -eq 1 ]] 00:05:09.958 19:46:00 -- spdk/autotest.sh@189 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:05:09.958 19:46:00 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:09.958 19:46:00 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:09.958 19:46:00 -- common/autotest_common.sh@10 -- # set +x 00:05:09.958 ************************************ 00:05:09.958 START TEST bdev_raid 00:05:09.958 ************************************ 00:05:09.958 19:46:00 bdev_raid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:05:09.958 * Looking for test storage... 00:05:09.958 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:05:09.958 19:46:00 bdev_raid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:09.958 19:46:00 bdev_raid -- common/autotest_common.sh@1693 -- # lcov --version 00:05:09.958 19:46:00 bdev_raid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:09.958 19:46:00 bdev_raid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:09.958 19:46:00 bdev_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:09.958 19:46:00 bdev_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:09.958 19:46:00 bdev_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:09.958 19:46:00 bdev_raid -- scripts/common.sh@336 -- # IFS=.-: 00:05:09.958 19:46:00 bdev_raid -- scripts/common.sh@336 -- # read -ra ver1 00:05:09.958 19:46:00 bdev_raid -- scripts/common.sh@337 -- # IFS=.-: 00:05:09.958 19:46:00 bdev_raid -- scripts/common.sh@337 -- # read -ra ver2 00:05:09.958 19:46:00 bdev_raid -- scripts/common.sh@338 -- # local 'op=<' 00:05:09.958 19:46:00 bdev_raid -- scripts/common.sh@340 -- # ver1_l=2 00:05:09.958 19:46:00 bdev_raid -- scripts/common.sh@341 -- # ver2_l=1 00:05:09.958 
19:46:00 bdev_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:09.958 19:46:00 bdev_raid -- scripts/common.sh@344 -- # case "$op" in 00:05:09.958 19:46:00 bdev_raid -- scripts/common.sh@345 -- # : 1 00:05:09.958 19:46:00 bdev_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:09.958 19:46:00 bdev_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:09.958 19:46:00 bdev_raid -- scripts/common.sh@365 -- # decimal 1 00:05:09.958 19:46:00 bdev_raid -- scripts/common.sh@353 -- # local d=1 00:05:09.958 19:46:00 bdev_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:09.958 19:46:00 bdev_raid -- scripts/common.sh@355 -- # echo 1 00:05:09.958 19:46:00 bdev_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:05:09.958 19:46:00 bdev_raid -- scripts/common.sh@366 -- # decimal 2 00:05:09.958 19:46:00 bdev_raid -- scripts/common.sh@353 -- # local d=2 00:05:09.958 19:46:00 bdev_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:09.958 19:46:00 bdev_raid -- scripts/common.sh@355 -- # echo 2 00:05:09.958 19:46:00 bdev_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:05:09.958 19:46:00 bdev_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:09.958 19:46:00 bdev_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:09.958 19:46:00 bdev_raid -- scripts/common.sh@368 -- # return 0 00:05:09.958 19:46:00 bdev_raid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:09.958 19:46:00 bdev_raid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:09.958 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.958 --rc genhtml_branch_coverage=1 00:05:09.959 --rc genhtml_function_coverage=1 00:05:09.959 --rc genhtml_legend=1 00:05:09.959 --rc geninfo_all_blocks=1 00:05:09.959 --rc geninfo_unexecuted_blocks=1 00:05:09.959 00:05:09.959 ' 00:05:09.959 19:46:00 bdev_raid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 
00:05:09.959 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.959 --rc genhtml_branch_coverage=1 00:05:09.959 --rc genhtml_function_coverage=1 00:05:09.959 --rc genhtml_legend=1 00:05:09.959 --rc geninfo_all_blocks=1 00:05:09.959 --rc geninfo_unexecuted_blocks=1 00:05:09.959 00:05:09.959 ' 00:05:09.959 19:46:00 bdev_raid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:09.959 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.959 --rc genhtml_branch_coverage=1 00:05:09.959 --rc genhtml_function_coverage=1 00:05:09.959 --rc genhtml_legend=1 00:05:09.959 --rc geninfo_all_blocks=1 00:05:09.959 --rc geninfo_unexecuted_blocks=1 00:05:09.959 00:05:09.959 ' 00:05:09.959 19:46:00 bdev_raid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:09.959 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.959 --rc genhtml_branch_coverage=1 00:05:09.959 --rc genhtml_function_coverage=1 00:05:09.959 --rc genhtml_legend=1 00:05:09.959 --rc geninfo_all_blocks=1 00:05:09.959 --rc geninfo_unexecuted_blocks=1 00:05:09.959 00:05:09.959 ' 00:05:09.959 19:46:00 bdev_raid -- bdev/bdev_raid.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:09.959 19:46:00 bdev_raid -- bdev/nbd_common.sh@6 -- # set -e 00:05:09.959 19:46:00 bdev_raid -- bdev/bdev_raid.sh@14 -- # rpc_py=rpc_cmd 00:05:09.959 19:46:00 bdev_raid -- bdev/bdev_raid.sh@946 -- # mkdir -p /raidtest 00:05:09.959 19:46:00 bdev_raid -- bdev/bdev_raid.sh@947 -- # trap 'cleanup; exit 1' EXIT 00:05:09.959 19:46:00 bdev_raid -- bdev/bdev_raid.sh@949 -- # base_blocklen=512 00:05:09.959 19:46:00 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid1_resize_data_offset_test raid_resize_data_offset_test 00:05:09.959 19:46:00 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:09.959 19:46:00 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:09.959 19:46:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 
00:05:09.959 ************************************ 00:05:09.959 START TEST raid1_resize_data_offset_test 00:05:09.959 ************************************ 00:05:09.959 19:46:00 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1129 -- # raid_resize_data_offset_test 00:05:09.959 19:46:00 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@917 -- # raid_pid=58858 00:05:09.959 19:46:00 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@918 -- # echo 'Process raid pid: 58858' 00:05:09.959 Process raid pid: 58858 00:05:09.959 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:09.959 19:46:00 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@919 -- # waitforlisten 58858 00:05:09.959 19:46:00 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@835 -- # '[' -z 58858 ']' 00:05:09.959 19:46:00 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@916 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:05:09.959 19:46:00 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:09.959 19:46:00 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:09.959 19:46:00 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:09.959 19:46:00 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:09.959 19:46:00 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:05:10.217 [2024-11-26 19:46:00.948585] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 
00:05:10.217 [2024-11-26 19:46:00.948726] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:10.217 [2024-11-26 19:46:01.113055] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:10.475 [2024-11-26 19:46:01.244328] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:10.475 [2024-11-26 19:46:01.402841] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:05:10.475 [2024-11-26 19:46:01.402897] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:05:11.041 19:46:01 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:11.041 19:46:01 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@868 -- # return 0 00:05:11.041 19:46:01 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@922 -- # rpc_cmd bdev_malloc_create -b malloc0 64 512 -o 16 00:05:11.041 19:46:01 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:11.041 19:46:01 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:05:11.041 malloc0 00:05:11.041 19:46:01 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:11.041 19:46:01 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@923 -- # rpc_cmd bdev_malloc_create -b malloc1 64 512 -o 16 00:05:11.041 19:46:01 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:11.041 19:46:01 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:05:11.041 malloc1 00:05:11.041 19:46:01 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:11.041 19:46:01 
bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@924 -- # rpc_cmd bdev_null_create null0 64 512 00:05:11.041 19:46:01 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:11.041 19:46:01 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:05:11.041 null0 00:05:11.041 19:46:01 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:11.041 19:46:01 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@926 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''malloc0 malloc1 null0'\''' -s 00:05:11.041 19:46:01 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:11.041 19:46:01 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:05:11.041 [2024-11-26 19:46:01.937199] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc0 is claimed 00:05:11.041 [2024-11-26 19:46:01.939193] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:05:11.041 [2024-11-26 19:46:01.939245] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev null0 is claimed 00:05:11.041 [2024-11-26 19:46:01.939399] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:05:11.041 [2024-11-26 19:46:01.939414] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 129024, blocklen 512 00:05:11.041 [2024-11-26 19:46:01.939701] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:05:11.041 [2024-11-26 19:46:01.939847] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:05:11.041 [2024-11-26 19:46:01.939859] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:05:11.041 [2024-11-26 19:46:01.940002] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:05:11.041 19:46:01 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:11.041 19:46:01 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # rpc_cmd bdev_raid_get_bdevs all 00:05:11.041 19:46:01 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:11.041 19:46:01 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:05:11.041 19:46:01 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:05:11.041 19:46:01 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:11.041 19:46:01 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # (( 2048 == 2048 )) 00:05:11.041 19:46:01 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@931 -- # rpc_cmd bdev_null_delete null0 00:05:11.041 19:46:01 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:11.041 19:46:01 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:05:11.299 [2024-11-26 19:46:01.981238] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: null0 00:05:11.299 19:46:01 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:11.299 19:46:01 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@935 -- # rpc_cmd bdev_malloc_create -b malloc2 512 512 -o 30 00:05:11.299 19:46:01 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:11.299 19:46:01 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:05:11.558 malloc2 00:05:11.558 19:46:02 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:11.558 19:46:02 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@936 -- # rpc_cmd bdev_raid_add_base_bdev 
Raid malloc2 00:05:11.558 19:46:02 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:11.558 19:46:02 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:05:11.558 [2024-11-26 19:46:02.380613] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:05:11.558 [2024-11-26 19:46:02.393316] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:05:11.558 19:46:02 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:11.558 [2024-11-26 19:46:02.395427] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev Raid 00:05:11.558 19:46:02 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # rpc_cmd bdev_raid_get_bdevs all 00:05:11.558 19:46:02 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:05:11.558 19:46:02 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:11.558 19:46:02 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:05:11.558 19:46:02 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:11.558 19:46:02 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # (( 2070 == 2070 )) 00:05:11.558 19:46:02 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@941 -- # killprocess 58858 00:05:11.558 19:46:02 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@954 -- # '[' -z 58858 ']' 00:05:11.558 19:46:02 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@958 -- # kill -0 58858 00:05:11.558 19:46:02 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@959 -- # uname 00:05:11.558 19:46:02 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux 
']' 00:05:11.558 19:46:02 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58858 00:05:11.558 killing process with pid 58858 00:05:11.558 19:46:02 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:11.558 19:46:02 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:11.558 19:46:02 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58858' 00:05:11.558 19:46:02 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@973 -- # kill 58858 00:05:11.558 19:46:02 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@978 -- # wait 58858 00:05:11.558 [2024-11-26 19:46:02.456334] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:05:11.558 [2024-11-26 19:46:02.457444] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev Raid: Operation canceled 00:05:11.558 [2024-11-26 19:46:02.457502] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:05:11.558 [2024-11-26 19:46:02.457519] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: malloc2 00:05:11.558 [2024-11-26 19:46:02.482359] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:05:11.558 [2024-11-26 19:46:02.482930] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:05:11.558 [2024-11-26 19:46:02.482956] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:05:12.933 [2024-11-26 19:46:03.664360] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:05:13.871 ************************************ 00:05:13.871 END TEST raid1_resize_data_offset_test 00:05:13.871 ************************************ 00:05:13.871 19:46:04 bdev_raid.raid1_resize_data_offset_test -- 
bdev/bdev_raid.sh@943 -- # return 0 00:05:13.871 00:05:13.871 real 0m3.566s 00:05:13.871 user 0m3.468s 00:05:13.871 sys 0m0.478s 00:05:13.871 19:46:04 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:13.871 19:46:04 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:05:13.871 19:46:04 bdev_raid -- bdev/bdev_raid.sh@953 -- # run_test raid0_resize_superblock_test raid_resize_superblock_test 0 00:05:13.871 19:46:04 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:13.871 19:46:04 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:13.871 19:46:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:05:13.871 ************************************ 00:05:13.871 START TEST raid0_resize_superblock_test 00:05:13.871 ************************************ 00:05:13.871 19:46:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1129 -- # raid_resize_superblock_test 0 00:05:13.871 19:46:04 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=0 00:05:13.871 19:46:04 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=58931 00:05:13.871 Process raid pid: 58931 00:05:13.871 19:46:04 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 58931' 00:05:13.871 19:46:04 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 58931 00:05:13.871 19:46:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 58931 ']' 00:05:13.871 19:46:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:13.871 19:46:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:13.871 19:46:04 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:05:13.871 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:13.871 19:46:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:13.871 19:46:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:13.871 19:46:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:05:13.871 [2024-11-26 19:46:04.556848] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 00:05:13.871 [2024-11-26 19:46:04.556977] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:13.871 [2024-11-26 19:46:04.712999] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:14.129 [2024-11-26 19:46:04.832170] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:14.129 [2024-11-26 19:46:04.981799] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:05:14.129 [2024-11-26 19:46:04.981855] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:05:14.722 19:46:05 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:14.722 19:46:05 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:05:14.722 19:46:05 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 00:05:14.722 19:46:05 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:14.722 19:46:05 bdev_raid.raid0_resize_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:05:14.982 malloc0 00:05:14.982 19:46:05 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:14.982 19:46:05 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:05:14.982 19:46:05 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:14.982 19:46:05 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:05:14.982 [2024-11-26 19:46:05.816879] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:05:14.982 [2024-11-26 19:46:05.816944] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:14.982 [2024-11-26 19:46:05.816964] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:05:14.983 [2024-11-26 19:46:05.816977] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:14.983 [2024-11-26 19:46:05.819314] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:14.983 [2024-11-26 19:46:05.819362] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:05:14.983 pt0 00:05:14.983 19:46:05 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:14.983 19:46:05 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:05:14.983 19:46:05 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:14.983 19:46:05 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:05:14.983 8171a349-4c0e-40a6-a4a2-ab1d96064239 00:05:14.983 19:46:05 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:14.983 19:46:05 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd 
bdev_lvol_create -l lvs0 lvol0 64 00:05:14.983 19:46:05 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:14.983 19:46:05 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:05:14.983 0439e710-cf1f-4d2b-8b46-c720df8699f7 00:05:14.983 19:46:05 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:14.983 19:46:05 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:05:14.983 19:46:05 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:14.983 19:46:05 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:05:15.242 9da256a0-b910-4634-bd9d-170102e91e82 00:05:15.242 19:46:05 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:15.242 19:46:05 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:05:15.242 19:46:05 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@870 -- # rpc_cmd bdev_raid_create -n Raid -r 0 -z 64 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:05:15.242 19:46:05 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:15.242 19:46:05 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:05:15.242 [2024-11-26 19:46:05.921821] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 0439e710-cf1f-4d2b-8b46-c720df8699f7 is claimed 00:05:15.242 [2024-11-26 19:46:05.921918] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 9da256a0-b910-4634-bd9d-170102e91e82 is claimed 00:05:15.242 [2024-11-26 19:46:05.922052] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:05:15.242 [2024-11-26 19:46:05.922067] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 245760, blocklen 512 
00:05:15.242 [2024-11-26 19:46:05.922352] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:05:15.242 [2024-11-26 19:46:05.922535] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:05:15.242 [2024-11-26 19:46:05.922544] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:05:15.243 [2024-11-26 19:46:05.922695] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:05:15.243 19:46:05 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:15.243 19:46:05 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:05:15.243 19:46:05 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:05:15.243 19:46:05 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:15.243 19:46:05 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:05:15.243 19:46:05 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:15.243 19:46:05 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:05:15.243 19:46:05 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:05:15.243 19:46:05 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:15.243 19:46:05 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:05:15.243 19:46:05 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:05:15.243 19:46:05 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:15.243 19:46:05 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:05:15.243 
19:46:05 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:05:15.243 19:46:05 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # rpc_cmd bdev_get_bdevs -b Raid 00:05:15.243 19:46:05 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:05:15.243 19:46:05 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # jq '.[].num_blocks' 00:05:15.243 19:46:05 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:15.243 19:46:05 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:05:15.243 [2024-11-26 19:46:06.002121] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:05:15.243 19:46:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:15.243 19:46:05 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:05:15.243 19:46:05 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:05:15.243 19:46:06 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # (( 245760 == 245760 )) 00:05:15.243 19:46:06 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:05:15.243 19:46:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:15.243 19:46:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:05:15.243 [2024-11-26 19:46:06.034091] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:05:15.243 [2024-11-26 19:46:06.034126] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '0439e710-cf1f-4d2b-8b46-c720df8699f7' was resized: old size 131072, new size 204800 00:05:15.243 19:46:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:05:15.243 19:46:06 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:05:15.243 19:46:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:15.243 19:46:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:05:15.243 [2024-11-26 19:46:06.042014] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:05:15.243 [2024-11-26 19:46:06.042041] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '9da256a0-b910-4634-bd9d-170102e91e82' was resized: old size 131072, new size 204800 00:05:15.243 [2024-11-26 19:46:06.042065] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 245760 to 393216 00:05:15.243 19:46:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:15.243 19:46:06 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:05:15.243 19:46:06 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:05:15.243 19:46:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:15.243 19:46:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:05:15.243 19:46:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:15.243 19:46:06 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:05:15.243 19:46:06 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:05:15.243 19:46:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:15.243 19:46:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:05:15.243 19:46:06 
bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:05:15.243 19:46:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:15.243 19:46:06 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:05:15.243 19:46:06 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:05:15.243 19:46:06 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # jq '.[].num_blocks' 00:05:15.243 19:46:06 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:05:15.243 19:46:06 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # rpc_cmd bdev_get_bdevs -b Raid 00:05:15.243 19:46:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:15.243 19:46:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:05:15.243 [2024-11-26 19:46:06.118140] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:05:15.243 19:46:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:15.243 19:46:06 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:05:15.243 19:46:06 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:05:15.243 19:46:06 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # (( 393216 == 393216 )) 00:05:15.243 19:46:06 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:05:15.243 19:46:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:15.243 19:46:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:05:15.243 [2024-11-26 19:46:06.157907] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 
being removed: closing lvstore lvs0 00:05:15.243 [2024-11-26 19:46:06.157985] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:05:15.243 [2024-11-26 19:46:06.158000] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:05:15.243 [2024-11-26 19:46:06.158015] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:05:15.243 [2024-11-26 19:46:06.158138] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:05:15.243 [2024-11-26 19:46:06.158173] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:05:15.243 [2024-11-26 19:46:06.158185] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:05:15.243 19:46:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:15.243 19:46:06 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:05:15.243 19:46:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:15.243 19:46:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:05:15.243 [2024-11-26 19:46:06.165806] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:05:15.243 [2024-11-26 19:46:06.165854] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:15.243 [2024-11-26 19:46:06.165875] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:05:15.243 [2024-11-26 19:46:06.165885] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:15.243 [2024-11-26 19:46:06.168177] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:15.243 [2024-11-26 19:46:06.168212] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 
00:05:15.243 [2024-11-26 19:46:06.169878] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 0439e710-cf1f-4d2b-8b46-c720df8699f7 00:05:15.243 [2024-11-26 19:46:06.169947] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 0439e710-cf1f-4d2b-8b46-c720df8699f7 is claimed 00:05:15.243 [2024-11-26 19:46:06.170048] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 9da256a0-b910-4634-bd9d-170102e91e82 00:05:15.243 [2024-11-26 19:46:06.170066] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 9da256a0-b910-4634-bd9d-170102e91e82 is claimed 00:05:15.243 [2024-11-26 19:46:06.170219] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev 9da256a0-b910-4634-bd9d-170102e91e82 (2) smaller than existing raid bdev Raid (3) 00:05:15.243 [2024-11-26 19:46:06.170243] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev 0439e710-cf1f-4d2b-8b46-c720df8699f7: File exists 00:05:15.243 [2024-11-26 19:46:06.170280] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:05:15.243 [2024-11-26 19:46:06.170292] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 393216, blocklen 512 00:05:15.243 [2024-11-26 19:46:06.170554] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:05:15.243 [2024-11-26 19:46:06.170697] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:05:15.243 [2024-11-26 19:46:06.170711] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00 00:05:15.243 [2024-11-26 19:46:06.170857] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:05:15.243 pt0 00:05:15.243 19:46:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:15.243 19:46:06 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd 
bdev_wait_for_examine 00:05:15.243 19:46:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:15.243 19:46:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:05:15.501 19:46:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:15.501 19:46:06 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:05:15.501 19:46:06 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # rpc_cmd bdev_get_bdevs -b Raid 00:05:15.501 19:46:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:15.501 19:46:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:05:15.501 19:46:06 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:05:15.501 19:46:06 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # jq '.[].num_blocks' 00:05:15.501 [2024-11-26 19:46:06.190222] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:05:15.501 19:46:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:15.501 19:46:06 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:05:15.501 19:46:06 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:05:15.501 19:46:06 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # (( 393216 == 393216 )) 00:05:15.501 19:46:06 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 58931 00:05:15.501 19:46:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 58931 ']' 00:05:15.501 19:46:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@958 -- # kill -0 58931 00:05:15.501 19:46:06 bdev_raid.raid0_resize_superblock_test -- 
common/autotest_common.sh@959 -- # uname 00:05:15.501 19:46:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:15.501 19:46:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58931 00:05:15.501 killing process with pid 58931 00:05:15.501 19:46:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:15.501 19:46:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:15.501 19:46:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58931' 00:05:15.501 19:46:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@973 -- # kill 58931 00:05:15.501 [2024-11-26 19:46:06.254895] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:05:15.501 19:46:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@978 -- # wait 58931 00:05:15.501 [2024-11-26 19:46:06.254985] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:05:15.501 [2024-11-26 19:46:06.255045] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:05:15.501 [2024-11-26 19:46:06.255055] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline 00:05:16.433 [2024-11-26 19:46:07.070732] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:05:17.000 ************************************ 00:05:17.000 END TEST raid0_resize_superblock_test 00:05:17.000 ************************************ 00:05:17.000 19:46:07 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:05:17.000 00:05:17.000 real 0m3.209s 00:05:17.000 user 0m3.384s 00:05:17.000 sys 0m0.455s 00:05:17.000 19:46:07 bdev_raid.raid0_resize_superblock_test -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:05:17.000 19:46:07 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:05:17.000 19:46:07 bdev_raid -- bdev/bdev_raid.sh@954 -- # run_test raid1_resize_superblock_test raid_resize_superblock_test 1 00:05:17.000 19:46:07 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:17.000 19:46:07 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:17.000 19:46:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:05:17.000 ************************************ 00:05:17.000 START TEST raid1_resize_superblock_test 00:05:17.000 ************************************ 00:05:17.000 19:46:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1129 -- # raid_resize_superblock_test 1 00:05:17.000 19:46:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=1 00:05:17.000 19:46:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=59013 00:05:17.000 Process raid pid: 59013 00:05:17.000 19:46:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 59013' 00:05:17.000 19:46:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 59013 00:05:17.000 19:46:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 59013 ']' 00:05:17.000 19:46:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:17.000 19:46:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:05:17.000 19:46:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:17.000 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:17.000 19:46:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:17.000 19:46:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:17.000 19:46:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:05:17.000 [2024-11-26 19:46:07.808355] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 00:05:17.000 [2024-11-26 19:46:07.808516] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:17.257 [2024-11-26 19:46:07.976870] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:17.258 [2024-11-26 19:46:08.121549] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.515 [2024-11-26 19:46:08.272598] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:05:17.515 [2024-11-26 19:46:08.272653] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:05:17.830 19:46:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:17.830 19:46:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:05:17.830 19:46:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 00:05:17.830 19:46:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:17.830 19:46:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:05:18.398 malloc0 00:05:18.398 19:46:09 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:18.398 19:46:09 
bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:05:18.398 19:46:09 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:18.398 19:46:09 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:05:18.398 [2024-11-26 19:46:09.130787] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:05:18.398 [2024-11-26 19:46:09.130856] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:18.398 [2024-11-26 19:46:09.130883] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:05:18.398 [2024-11-26 19:46:09.130897] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:18.398 [2024-11-26 19:46:09.133276] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:18.398 [2024-11-26 19:46:09.133316] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:05:18.398 pt0 00:05:18.398 19:46:09 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:18.398 19:46:09 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:05:18.398 19:46:09 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:18.398 19:46:09 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:05:18.398 c95ef93d-3a1e-42df-bc1c-ddbea05c3c2a 00:05:18.398 19:46:09 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:18.398 19:46:09 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 00:05:18.398 19:46:09 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:18.398 19:46:09 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:05:18.398 2e7cd84b-1369-41f1-b2c5-9dda88c7bed3 00:05:18.398 19:46:09 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:18.398 19:46:09 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:05:18.398 19:46:09 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:18.398 19:46:09 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:05:18.398 9714fcd6-fa3e-450e-9f5e-e94d0fc1e89c 00:05:18.398 19:46:09 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:18.398 19:46:09 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:05:18.398 19:46:09 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@871 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:05:18.398 19:46:09 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:18.398 19:46:09 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:05:18.398 [2024-11-26 19:46:09.246765] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 2e7cd84b-1369-41f1-b2c5-9dda88c7bed3 is claimed 00:05:18.398 [2024-11-26 19:46:09.246866] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 9714fcd6-fa3e-450e-9f5e-e94d0fc1e89c is claimed 00:05:18.398 [2024-11-26 19:46:09.247014] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:05:18.398 [2024-11-26 19:46:09.247037] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 122880, blocklen 512 00:05:18.398 [2024-11-26 19:46:09.247310] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:05:18.398 [2024-11-26 19:46:09.247511] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:05:18.398 [2024-11-26 19:46:09.247528] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:05:18.398 [2024-11-26 19:46:09.247681] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:05:18.398 19:46:09 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:18.398 19:46:09 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:05:18.398 19:46:09 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:05:18.398 19:46:09 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:18.398 19:46:09 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:05:18.398 19:46:09 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:18.398 19:46:09 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:05:18.398 19:46:09 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:05:18.398 19:46:09 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:05:18.398 19:46:09 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:18.398 19:46:09 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:05:18.398 19:46:09 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:18.398 19:46:09 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:05:18.398 19:46:09 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:05:18.398 19:46:09 bdev_raid.raid1_resize_superblock_test -- 
bdev/bdev_raid.sh@881 -- # jq '.[].num_blocks' 00:05:18.398 19:46:09 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:05:18.398 19:46:09 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # rpc_cmd bdev_get_bdevs -b Raid 00:05:18.398 19:46:09 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:18.398 19:46:09 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:05:18.398 [2024-11-26 19:46:09.323112] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:05:18.657 19:46:09 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:18.657 19:46:09 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:05:18.657 19:46:09 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:05:18.657 19:46:09 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # (( 122880 == 122880 )) 00:05:18.657 19:46:09 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:05:18.657 19:46:09 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:18.657 19:46:09 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:05:18.657 [2024-11-26 19:46:09.351048] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:05:18.657 [2024-11-26 19:46:09.351083] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '2e7cd84b-1369-41f1-b2c5-9dda88c7bed3' was resized: old size 131072, new size 204800 00:05:18.657 19:46:09 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:18.657 19:46:09 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:05:18.657 19:46:09 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:18.657 19:46:09 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:05:18.657 [2024-11-26 19:46:09.358904] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:05:18.657 [2024-11-26 19:46:09.358931] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '9714fcd6-fa3e-450e-9f5e-e94d0fc1e89c' was resized: old size 131072, new size 204800 00:05:18.657 [2024-11-26 19:46:09.358955] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 122880 to 196608 00:05:18.657 19:46:09 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:18.657 19:46:09 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:05:18.657 19:46:09 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:05:18.657 19:46:09 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:18.657 19:46:09 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:05:18.657 19:46:09 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:18.657 19:46:09 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:05:18.657 19:46:09 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:05:18.657 19:46:09 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:18.657 19:46:09 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:05:18.657 19:46:09 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:05:18.657 19:46:09 bdev_raid.raid1_resize_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:18.657 19:46:09 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:05:18.657 19:46:09 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:05:18.657 19:46:09 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # rpc_cmd bdev_get_bdevs -b Raid 00:05:18.657 19:46:09 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:18.657 19:46:09 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:05:18.657 19:46:09 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # jq '.[].num_blocks' 00:05:18.657 19:46:09 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:05:18.657 [2024-11-26 19:46:09.435083] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:05:18.657 19:46:09 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:18.657 19:46:09 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:05:18.657 19:46:09 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:05:18.657 19:46:09 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # (( 196608 == 196608 )) 00:05:18.657 19:46:09 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:05:18.657 19:46:09 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:18.657 19:46:09 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:05:18.657 [2024-11-26 19:46:09.466841] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0 00:05:18.657 [2024-11-26 19:46:09.466921] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 
00:05:18.657 [2024-11-26 19:46:09.466949] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:05:18.657 [2024-11-26 19:46:09.467128] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:05:18.657 [2024-11-26 19:46:09.467334] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:05:18.658 [2024-11-26 19:46:09.467423] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:05:18.658 [2024-11-26 19:46:09.467437] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:05:18.658 19:46:09 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:18.658 19:46:09 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:05:18.658 19:46:09 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:18.658 19:46:09 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:05:18.658 [2024-11-26 19:46:09.478769] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:05:18.658 [2024-11-26 19:46:09.478825] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:18.658 [2024-11-26 19:46:09.478845] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:05:18.658 [2024-11-26 19:46:09.478860] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:18.658 [2024-11-26 19:46:09.481191] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:18.658 [2024-11-26 19:46:09.481229] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:05:18.658 [2024-11-26 19:46:09.482863] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 
2e7cd84b-1369-41f1-b2c5-9dda88c7bed3 00:05:18.658 [2024-11-26 19:46:09.482931] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 2e7cd84b-1369-41f1-b2c5-9dda88c7bed3 is claimed 00:05:18.658 [2024-11-26 19:46:09.483042] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 9714fcd6-fa3e-450e-9f5e-e94d0fc1e89c 00:05:18.658 [2024-11-26 19:46:09.483082] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 9714fcd6-fa3e-450e-9f5e-e94d0fc1e89c is claimed 00:05:18.658 [2024-11-26 19:46:09.483223] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev 9714fcd6-fa3e-450e-9f5e-e94d0fc1e89c (2) smaller than existing raid bdev Raid (3) 00:05:18.658 [2024-11-26 19:46:09.483251] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev 2e7cd84b-1369-41f1-b2c5-9dda88c7bed3: File exists 00:05:18.658 [2024-11-26 19:46:09.483288] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:05:18.658 [2024-11-26 19:46:09.483298] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:05:18.658 [2024-11-26 19:46:09.483556] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:05:18.658 [2024-11-26 19:46:09.483712] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:05:18.658 [2024-11-26 19:46:09.483727] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00 00:05:18.658 pt0 00:05:18.658 [2024-11-26 19:46:09.483871] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:05:18.658 19:46:09 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:18.658 19:46:09 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:05:18.658 19:46:09 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:05:18.658 19:46:09 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:05:18.658 19:46:09 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:18.658 19:46:09 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:05:18.658 19:46:09 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # jq '.[].num_blocks' 00:05:18.658 19:46:09 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:05:18.658 19:46:09 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # rpc_cmd bdev_get_bdevs -b Raid 00:05:18.658 19:46:09 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:18.658 19:46:09 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:05:18.658 [2024-11-26 19:46:09.499193] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:05:18.658 19:46:09 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:18.658 19:46:09 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:05:18.658 19:46:09 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:05:18.658 19:46:09 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # (( 196608 == 196608 )) 00:05:18.658 19:46:09 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 59013 00:05:18.658 19:46:09 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 59013 ']' 00:05:18.658 19:46:09 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@958 -- # kill -0 59013 00:05:18.658 19:46:09 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@959 -- # uname 00:05:18.658 19:46:09 bdev_raid.raid1_resize_superblock_test -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:18.658 19:46:09 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59013 00:05:18.658 19:46:09 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:18.658 19:46:09 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:18.658 killing process with pid 59013 00:05:18.658 19:46:09 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59013' 00:05:18.658 19:46:09 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@973 -- # kill 59013 00:05:18.658 [2024-11-26 19:46:09.556664] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:05:18.658 [2024-11-26 19:46:09.556772] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:05:18.658 [2024-11-26 19:46:09.556839] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:05:18.658 [2024-11-26 19:46:09.556848] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline 00:05:18.658 19:46:09 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@978 -- # wait 59013 00:05:19.593 [2024-11-26 19:46:10.507265] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:05:20.528 19:46:11 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:05:20.528 00:05:20.528 real 0m3.546s 00:05:20.528 user 0m3.726s 00:05:20.528 sys 0m0.495s 00:05:20.528 19:46:11 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:20.528 19:46:11 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:05:20.528 ************************************ 00:05:20.528 END TEST raid1_resize_superblock_test 00:05:20.528 
************************************ 00:05:20.528 19:46:11 bdev_raid -- bdev/bdev_raid.sh@956 -- # uname -s 00:05:20.528 19:46:11 bdev_raid -- bdev/bdev_raid.sh@956 -- # '[' Linux = Linux ']' 00:05:20.528 19:46:11 bdev_raid -- bdev/bdev_raid.sh@956 -- # modprobe -n nbd 00:05:20.528 19:46:11 bdev_raid -- bdev/bdev_raid.sh@957 -- # has_nbd=true 00:05:20.528 19:46:11 bdev_raid -- bdev/bdev_raid.sh@958 -- # modprobe nbd 00:05:20.528 19:46:11 bdev_raid -- bdev/bdev_raid.sh@959 -- # run_test raid_function_test_raid0 raid_function_test raid0 00:05:20.528 19:46:11 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:20.528 19:46:11 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:20.528 19:46:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:05:20.528 ************************************ 00:05:20.528 START TEST raid_function_test_raid0 00:05:20.528 ************************************ 00:05:20.528 19:46:11 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1129 -- # raid_function_test raid0 00:05:20.528 19:46:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@64 -- # local raid_level=raid0 00:05:20.528 19:46:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:05:20.528 19:46:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:05:20.528 19:46:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@69 -- # raid_pid=59110 00:05:20.528 19:46:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 59110' 00:05:20.528 Process raid pid: 59110 00:05:20.528 19:46:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:05:20.528 19:46:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@71 -- # waitforlisten 59110 00:05:20.528 19:46:11 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@835 -- # '[' -z 
59110 ']' 00:05:20.528 19:46:11 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:20.528 19:46:11 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:20.528 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:20.528 19:46:11 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:20.528 19:46:11 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:20.528 19:46:11 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:05:20.528 [2024-11-26 19:46:11.416576] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 00:05:20.528 [2024-11-26 19:46:11.416710] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:20.786 [2024-11-26 19:46:11.577253] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:20.786 [2024-11-26 19:46:11.698322] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.043 [2024-11-26 19:46:11.848900] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:05:21.043 [2024-11-26 19:46:11.848956] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:05:21.609 19:46:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:21.609 19:46:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@868 -- # return 0 00:05:21.609 19:46:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:05:21.610 19:46:12 
bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:21.610 19:46:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:05:21.610 Base_1 00:05:21.610 19:46:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:21.610 19:46:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:05:21.610 19:46:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:21.610 19:46:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:05:21.610 Base_2 00:05:21.610 19:46:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:21.610 19:46:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''Base_1 Base_2'\''' -n raid 00:05:21.610 19:46:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:21.610 19:46:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:05:21.610 [2024-11-26 19:46:12.349844] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:05:21.610 [2024-11-26 19:46:12.351824] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:05:21.610 [2024-11-26 19:46:12.351899] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:05:21.610 [2024-11-26 19:46:12.351911] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:05:21.610 [2024-11-26 19:46:12.352199] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:05:21.610 [2024-11-26 19:46:12.352368] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:05:21.610 [2024-11-26 19:46:12.352383] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: 
raid bdev is created with name raid, raid_bdev 0x617000007780 00:05:21.610 [2024-11-26 19:46:12.352539] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:05:21.610 19:46:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:21.610 19:46:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:05:21.610 19:46:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:05:21.610 19:46:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:21.610 19:46:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:05:21.610 19:46:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:21.610 19:46:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:05:21.610 19:46:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:05:21.610 19:46:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:05:21.610 19:46:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:05:21.610 19:46:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:05:21.610 19:46:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:21.610 19:46:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:05:21.610 19:46:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:21.610 19:46:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@12 -- # local i 00:05:21.610 19:46:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:21.610 19:46:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 
-- # (( i < 1 )) 00:05:21.610 19:46:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:05:21.867 [2024-11-26 19:46:12.573977] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:05:21.867 /dev/nbd0 00:05:21.867 19:46:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:21.867 19:46:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:21.867 19:46:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:21.867 19:46:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@873 -- # local i 00:05:21.867 19:46:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:21.867 19:46:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:21.867 19:46:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:21.867 19:46:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@877 -- # break 00:05:21.867 19:46:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:21.867 19:46:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:21.867 19:46:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:21.868 1+0 records in 00:05:21.868 1+0 records out 00:05:21.868 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000334615 s, 12.2 MB/s 00:05:21.868 19:46:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:21.868 19:46:12 bdev_raid.raid_function_test_raid0 -- 
common/autotest_common.sh@890 -- # size=4096 00:05:21.868 19:46:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:21.868 19:46:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:21.868 19:46:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@893 -- # return 0 00:05:21.868 19:46:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:21.868 19:46:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:05:21.868 19:46:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:05:21.868 19:46:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:05:21.868 19:46:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:05:22.125 19:46:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:22.125 { 00:05:22.125 "nbd_device": "/dev/nbd0", 00:05:22.125 "bdev_name": "raid" 00:05:22.125 } 00:05:22.125 ]' 00:05:22.125 19:46:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:22.125 { 00:05:22.125 "nbd_device": "/dev/nbd0", 00:05:22.125 "bdev_name": "raid" 00:05:22.125 } 00:05:22.125 ]' 00:05:22.125 19:46:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:22.125 19:46:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:05:22.125 19:46:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:05:22.125 19:46:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:22.125 19:46:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=1 00:05:22.125 
19:46:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 1 00:05:22.125 19:46:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # count=1 00:05:22.125 19:46:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:05:22.125 19:46:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:05:22.125 19:46:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:05:22.125 19:46:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:05:22.125 19:46:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@19 -- # local blksize 00:05:22.125 19:46:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:05:22.125 19:46:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:05:22.125 19:46:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:05:22.125 19:46:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # blksize=512 00:05:22.125 19:46:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:05:22.125 19:46:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:05:22.125 19:46:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:05:22.125 19:46:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:05:22.125 19:46:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:05:22.125 19:46:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:05:22.125 19:46:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:05:22.125 19:46:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # local unmap_len 
00:05:22.125 19:46:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:05:22.125 4096+0 records in 00:05:22.125 4096+0 records out 00:05:22.125 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0211047 s, 99.4 MB/s 00:05:22.125 19:46:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:05:22.383 4096+0 records in 00:05:22.383 4096+0 records out 00:05:22.383 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.257429 s, 8.1 MB/s 00:05:22.383 19:46:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:05:22.383 19:46:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:05:22.383 19:46:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:05:22.383 19:46:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:05:22.383 19:46:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:05:22.383 19:46:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:05:22.383 19:46:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:05:22.383 128+0 records in 00:05:22.383 128+0 records out 00:05:22.383 65536 bytes (66 kB, 64 KiB) copied, 0.000496201 s, 132 MB/s 00:05:22.383 19:46:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:05:22.383 19:46:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:05:22.383 19:46:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:05:22.383 19:46:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( 
i++ )) 00:05:22.384 19:46:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:05:22.384 19:46:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:05:22.384 19:46:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:05:22.384 19:46:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:05:22.384 2035+0 records in 00:05:22.384 2035+0 records out 00:05:22.384 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.00723905 s, 144 MB/s 00:05:22.384 19:46:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:05:22.384 19:46:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:05:22.384 19:46:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:05:22.384 19:46:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:05:22.384 19:46:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:05:22.384 19:46:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:05:22.384 19:46:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:05:22.384 19:46:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:05:22.384 456+0 records in 00:05:22.384 456+0 records out 00:05:22.384 233472 bytes (233 kB, 228 KiB) copied, 0.000984769 s, 237 MB/s 00:05:22.384 19:46:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:05:22.384 19:46:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:05:22.384 19:46:13 
bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:05:22.384 19:46:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:05:22.384 19:46:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:05:22.384 19:46:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@52 -- # return 0 00:05:22.384 19:46:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:05:22.384 19:46:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:05:22.384 19:46:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:05:22.384 19:46:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:22.384 19:46:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@51 -- # local i 00:05:22.384 19:46:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:22.384 19:46:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:05:22.643 19:46:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:22.643 [2024-11-26 19:46:13.452899] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:05:22.643 19:46:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:22.643 19:46:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:22.643 19:46:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:22.643 19:46:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:22.643 19:46:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 
/proc/partitions 00:05:22.643 19:46:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@41 -- # break 00:05:22.643 19:46:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@45 -- # return 0 00:05:22.643 19:46:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:05:22.643 19:46:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:05:22.643 19:46:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:05:22.901 19:46:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:22.901 19:46:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:22.901 19:46:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:22.901 19:46:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:22.901 19:46:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:22.901 19:46:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:22.901 19:46:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # true 00:05:22.901 19:46:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=0 00:05:22.901 19:46:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:22.901 19:46:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # count=0 00:05:22.901 19:46:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:05:22.901 19:46:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # killprocess 59110 00:05:22.901 19:46:13 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@954 -- # '[' -z 59110 ']' 00:05:22.901 19:46:13 bdev_raid.raid_function_test_raid0 -- 
common/autotest_common.sh@958 -- # kill -0 59110 00:05:22.901 19:46:13 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@959 -- # uname 00:05:22.901 19:46:13 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:22.901 19:46:13 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59110 00:05:22.901 19:46:13 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:22.901 19:46:13 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:22.901 killing process with pid 59110 00:05:22.901 19:46:13 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59110' 00:05:22.901 19:46:13 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@973 -- # kill 59110 00:05:22.901 [2024-11-26 19:46:13.746209] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:05:22.901 [2024-11-26 19:46:13.746325] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:05:22.901 [2024-11-26 19:46:13.746400] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:05:22.901 [2024-11-26 19:46:13.746422] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:05:22.901 19:46:13 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@978 -- # wait 59110 00:05:23.159 [2024-11-26 19:46:13.882222] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:05:23.724 19:46:14 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@99 -- # return 0 00:05:23.724 00:05:23.724 real 0m3.196s 00:05:23.724 user 0m3.807s 00:05:23.724 sys 0m0.773s 00:05:23.724 19:46:14 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:23.724 19:46:14 bdev_raid.raid_function_test_raid0 
-- common/autotest_common.sh@10 -- # set +x 00:05:23.724 ************************************ 00:05:23.724 END TEST raid_function_test_raid0 00:05:23.724 ************************************ 00:05:23.725 19:46:14 bdev_raid -- bdev/bdev_raid.sh@960 -- # run_test raid_function_test_concat raid_function_test concat 00:05:23.725 19:46:14 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:23.725 19:46:14 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:23.725 19:46:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:05:23.725 ************************************ 00:05:23.725 START TEST raid_function_test_concat 00:05:23.725 ************************************ 00:05:23.725 19:46:14 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1129 -- # raid_function_test concat 00:05:23.725 19:46:14 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@64 -- # local raid_level=concat 00:05:23.725 19:46:14 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:05:23.725 19:46:14 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:05:23.725 19:46:14 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@69 -- # raid_pid=59228 00:05:23.725 Process raid pid: 59228 00:05:23.725 19:46:14 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 59228' 00:05:23.725 19:46:14 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@71 -- # waitforlisten 59228 00:05:23.725 19:46:14 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@835 -- # '[' -z 59228 ']' 00:05:23.725 19:46:14 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:05:23.725 19:46:14 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:23.725 19:46:14 bdev_raid.raid_function_test_concat -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:05:23.725 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:23.725 19:46:14 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:23.725 19:46:14 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:23.725 19:46:14 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:05:23.725 [2024-11-26 19:46:14.655521] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 00:05:23.725 [2024-11-26 19:46:14.655665] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:23.984 [2024-11-26 19:46:14.813995] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:23.984 [2024-11-26 19:46:14.916300] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.241 [2024-11-26 19:46:15.038598] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:05:24.241 [2024-11-26 19:46:15.038652] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:05:24.872 19:46:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:24.872 19:46:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@868 -- # return 0 00:05:24.872 19:46:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:05:24.872 19:46:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:24.872 19:46:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:05:24.872 Base_1 
00:05:24.872 19:46:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:24.872 19:46:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:05:24.872 19:46:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:24.872 19:46:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:05:24.872 Base_2 00:05:24.872 19:46:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:24.872 19:46:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''Base_1 Base_2'\''' -n raid 00:05:24.872 19:46:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:24.872 19:46:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:05:24.872 [2024-11-26 19:46:15.564864] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:05:24.872 [2024-11-26 19:46:15.566470] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:05:24.872 [2024-11-26 19:46:15.566533] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:05:24.872 [2024-11-26 19:46:15.566543] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:05:24.872 [2024-11-26 19:46:15.566779] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:05:24.872 [2024-11-26 19:46:15.566908] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:05:24.872 [2024-11-26 19:46:15.566920] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780 00:05:24.872 [2024-11-26 19:46:15.567057] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:05:24.872 19:46:15 
bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:24.872 19:46:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:05:24.872 19:46:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:05:24.872 19:46:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:24.872 19:46:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:05:24.872 19:46:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:24.872 19:46:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:05:24.872 19:46:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:05:24.872 19:46:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:05:24.872 19:46:15 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:05:24.872 19:46:15 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:05:24.872 19:46:15 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:24.872 19:46:15 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:05:24.872 19:46:15 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:24.872 19:46:15 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@12 -- # local i 00:05:24.872 19:46:15 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:24.872 19:46:15 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:05:24.872 19:46:15 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock 
nbd_start_disk raid /dev/nbd0 00:05:24.872 [2024-11-26 19:46:15.800980] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:05:25.131 /dev/nbd0 00:05:25.131 19:46:15 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:25.131 19:46:15 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:25.131 19:46:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:25.131 19:46:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@873 -- # local i 00:05:25.131 19:46:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:25.131 19:46:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:25.131 19:46:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:25.131 19:46:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@877 -- # break 00:05:25.131 19:46:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:25.131 19:46:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:25.131 19:46:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:25.131 1+0 records in 00:05:25.131 1+0 records out 00:05:25.131 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000339892 s, 12.1 MB/s 00:05:25.131 19:46:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:25.131 19:46:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@890 -- # size=4096 00:05:25.131 19:46:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@891 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:25.131 19:46:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:25.131 19:46:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@893 -- # return 0 00:05:25.131 19:46:15 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:25.131 19:46:15 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:05:25.131 19:46:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:05:25.131 19:46:15 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:05:25.131 19:46:15 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:05:25.390 19:46:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:25.390 { 00:05:25.390 "nbd_device": "/dev/nbd0", 00:05:25.390 "bdev_name": "raid" 00:05:25.390 } 00:05:25.390 ]' 00:05:25.390 19:46:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:25.390 { 00:05:25.390 "nbd_device": "/dev/nbd0", 00:05:25.390 "bdev_name": "raid" 00:05:25.390 } 00:05:25.390 ]' 00:05:25.390 19:46:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:25.390 19:46:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:05:25.390 19:46:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:25.390 19:46:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:05:25.390 19:46:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=1 00:05:25.390 19:46:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 1 00:05:25.390 19:46:16 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # count=1 00:05:25.390 19:46:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:05:25.390 19:46:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:05:25.390 19:46:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:05:25.390 19:46:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:05:25.390 19:46:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@19 -- # local blksize 00:05:25.390 19:46:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:05:25.390 19:46:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:05:25.390 19:46:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:05:25.390 19:46:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # blksize=512 00:05:25.390 19:46:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:05:25.390 19:46:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:05:25.390 19:46:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:05:25.390 19:46:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:05:25.390 19:46:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:05:25.390 19:46:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:05:25.390 19:46:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:05:25.390 19:46:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:05:25.390 19:46:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@29 -- # dd 
if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:05:25.390 4096+0 records in 00:05:25.390 4096+0 records out 00:05:25.390 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0208687 s, 100 MB/s 00:05:25.390 19:46:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:05:25.648 4096+0 records in 00:05:25.648 4096+0 records out 00:05:25.648 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.204358 s, 10.3 MB/s 00:05:25.648 19:46:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:05:25.648 19:46:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:05:25.648 19:46:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:05:25.648 19:46:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:05:25.648 19:46:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:05:25.648 19:46:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:05:25.648 19:46:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:05:25.648 128+0 records in 00:05:25.648 128+0 records out 00:05:25.648 65536 bytes (66 kB, 64 KiB) copied, 0.000661521 s, 99.1 MB/s 00:05:25.648 19:46:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:05:25.648 19:46:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:05:25.648 19:46:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:05:25.648 19:46:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:05:25.648 19:46:16 bdev_raid.raid_function_test_concat -- 
bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:05:25.648 19:46:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:05:25.648 19:46:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:05:25.648 19:46:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:05:25.648 2035+0 records in 00:05:25.648 2035+0 records out 00:05:25.648 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.00946303 s, 110 MB/s 00:05:25.648 19:46:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:05:25.648 19:46:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:05:25.648 19:46:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:05:25.648 19:46:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:05:25.648 19:46:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:05:25.648 19:46:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:05:25.648 19:46:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:05:25.648 19:46:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:05:25.648 456+0 records in 00:05:25.648 456+0 records out 00:05:25.648 233472 bytes (233 kB, 228 KiB) copied, 0.00147005 s, 159 MB/s 00:05:25.648 19:46:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:05:25.648 19:46:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:05:25.648 19:46:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 
2097152 /raidtest/raidrandtest /dev/nbd0 00:05:25.648 19:46:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:05:25.648 19:46:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:05:25.648 19:46:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@52 -- # return 0 00:05:25.648 19:46:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:05:25.648 19:46:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:05:25.648 19:46:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:05:25.648 19:46:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:25.648 19:46:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@51 -- # local i 00:05:25.648 19:46:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:25.648 19:46:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:05:25.905 19:46:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:25.905 19:46:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:25.905 19:46:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:25.905 19:46:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:25.905 [2024-11-26 19:46:16.694543] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:05:25.905 19:46:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:25.905 19:46:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:25.905 19:46:16 
bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@41 -- # break 00:05:25.905 19:46:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@45 -- # return 0 00:05:25.905 19:46:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:05:25.905 19:46:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:05:25.905 19:46:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:05:26.162 19:46:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:26.162 19:46:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:26.162 19:46:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:26.162 19:46:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:26.162 19:46:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:26.162 19:46:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:26.162 19:46:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # true 00:05:26.162 19:46:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=0 00:05:26.162 19:46:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:26.162 19:46:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # count=0 00:05:26.162 19:46:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:05:26.162 19:46:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # killprocess 59228 00:05:26.162 19:46:16 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@954 -- # '[' -z 59228 ']' 00:05:26.162 19:46:16 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@958 -- 
# kill -0 59228
00:05:26.162 19:46:16 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@959 -- # uname
00:05:26.162 19:46:17 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:26.162 19:46:17 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59228
00:05:26.162 19:46:17 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:05:26.162 19:46:17 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:05:26.162 killing process with pid 59228
00:05:26.162 19:46:17 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59228'
00:05:26.162 19:46:17 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@973 -- # kill 59228
00:05:26.162 [2024-11-26 19:46:17.026835] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:05:26.162 [2024-11-26 19:46:17.026953] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:05:26.162 [2024-11-26 19:46:17.027028] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:05:26.162 [2024-11-26 19:46:17.027042] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline
00:05:26.162 19:46:17 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@978 -- # wait 59228
00:05:26.420 [2024-11-26 19:46:17.161463] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:05:26.985 19:46:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@99 -- # return 0
00:05:26.985
00:05:26.985 real 0m3.332s
00:05:26.985 user 0m4.054s
00:05:26.985 sys 0m0.781s
00:05:26.985 19:46:17 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:26.985 19:46:17 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x
00:05:26.985 ************************************
00:05:26.985 END TEST raid_function_test_concat
00:05:26.985 ************************************
00:05:27.243 19:46:17 bdev_raid -- bdev/bdev_raid.sh@963 -- # run_test raid0_resize_test raid_resize_test 0
00:05:27.243 19:46:17 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:05:27.243 19:46:17 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:27.243 19:46:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:05:27.243 ************************************
00:05:27.243 START TEST raid0_resize_test
00:05:27.243 ************************************
00:05:27.243 19:46:17 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1129 -- # raid_resize_test 0
00:05:27.243 19:46:17 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=0
00:05:27.243 19:46:17 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512
00:05:27.244 19:46:17 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32
00:05:27.244 19:46:17 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64
00:05:27.244 19:46:17 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt
00:05:27.244 19:46:17 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb
00:05:27.244 19:46:17 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb
00:05:27.244 19:46:17 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size
00:05:27.244 19:46:17 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=59344
00:05:27.244 Process raid pid: 59344
00:05:27.244 19:46:17 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 59344'
00:05:27.244 19:46:17 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 59344
00:05:27.244 19:46:17 bdev_raid.raid0_resize_test -- common/autotest_common.sh@835 -- # '[' -z 59344 ']'
00:05:27.244 19:46:17 bdev_raid.raid0_resize_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:27.244 19:46:17 bdev_raid.raid0_resize_test -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:27.244 19:46:17 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:05:27.244 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:27.244 19:46:17 bdev_raid.raid0_resize_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:27.244 19:46:17 bdev_raid.raid0_resize_test -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:27.244 19:46:17 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x
00:05:27.244 [2024-11-26 19:46:18.029975] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization...
00:05:27.244 [2024-11-26 19:46:18.030097] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:05:27.501 [2024-11-26 19:46:18.198141] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:27.501 [2024-11-26 19:46:18.315856] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:27.759 [2024-11-26 19:46:18.465378] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:05:27.759 [2024-11-26 19:46:18.465427] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:05:28.017 19:46:18 bdev_raid.raid0_resize_test -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:28.017 19:46:18 bdev_raid.raid0_resize_test -- common/autotest_common.sh@868 -- # return 0
00:05:28.017 19:46:18 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512
00:05:28.017 19:46:18 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:28.017 19:46:18 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x
00:05:28.017 Base_1
00:05:28.017 19:46:18 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:28.017 19:46:18 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512
00:05:28.017 19:46:18 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:28.017 19:46:18 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x
00:05:28.017 Base_2
00:05:28.017 19:46:18 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:28.017 19:46:18 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 0 -eq 0 ']'
00:05:28.017 19:46:18 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@350 -- # rpc_cmd bdev_raid_create -z 64 -r 0 -b ''\''Base_1 Base_2'\''' -n Raid
00:05:28.017 19:46:18 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:28.017 19:46:18 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x
00:05:28.017 [2024-11-26 19:46:18.906591] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed
00:05:28.017 [2024-11-26 19:46:18.908608] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed
00:05:28.017 [2024-11-26 19:46:18.908675] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
00:05:28.017 [2024-11-26 19:46:18.908687] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512
00:05:28.017 [2024-11-26 19:46:18.908982] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0
00:05:28.017 [2024-11-26 19:46:18.909109] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
00:05:28.017 [2024-11-26 19:46:18.909125] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780
00:05:28.017 [2024-11-26 19:46:18.909284] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:05:28.017 19:46:18 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:28.017 19:46:18 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64
00:05:28.017 19:46:18 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:28.017 19:46:18 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x
00:05:28.017 [2024-11-26 19:46:18.914567] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev
00:05:28.017 [2024-11-26 19:46:18.914594] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072
00:05:28.017 true
00:05:28.017 19:46:18 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:28.017 19:46:18 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid
00:05:28.017 19:46:18 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks'
00:05:28.017 19:46:18 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:28.017 19:46:18 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x
00:05:28.017 [2024-11-26 19:46:18.926759] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:05:28.017 19:46:18 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:28.275 19:46:18 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=131072
00:05:28.275 19:46:18 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=64
00:05:28.275 19:46:18 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 0 -eq 0 ']'
00:05:28.275 19:46:18 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@362 -- # expected_size=64
00:05:28.275 19:46:18 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 64 '!=' 64 ']'
00:05:28.275 19:46:18 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64
00:05:28.275 19:46:18 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:28.275 19:46:18 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x
00:05:28.275 [2024-11-26 19:46:18.962606] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev
00:05:28.275 [2024-11-26 19:46:18.962647] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072
00:05:28.275 [2024-11-26 19:46:18.962682] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 131072 to 262144
00:05:28.275 true
00:05:28.275 19:46:18 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:28.275 19:46:18 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks'
00:05:28.275 19:46:18 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid
00:05:28.275 19:46:18 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:28.275 19:46:18 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x
00:05:28.275 [2024-11-26 19:46:18.974777] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:05:28.275 19:46:18 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:28.275 19:46:18 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=262144
00:05:28.275 19:46:18 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=128
00:05:28.275 19:46:18 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 0 -eq 0 ']'
00:05:28.275 19:46:18 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@378 -- # expected_size=128
00:05:28.275 19:46:18 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 128 '!=' 128 ']'
00:05:28.275 19:46:18 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 59344
00:05:28.275 19:46:18 bdev_raid.raid0_resize_test -- common/autotest_common.sh@954 -- # '[' -z 59344 ']'
00:05:28.275 19:46:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@958 -- # kill -0 59344
00:05:28.275 19:46:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@959 -- # uname
00:05:28.275 19:46:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:28.275 19:46:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59344
00:05:28.275 19:46:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:05:28.275 19:46:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:05:28.275 killing process with pid 59344
00:05:28.275 19:46:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59344'
00:05:28.275 19:46:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@973 -- # kill 59344
00:05:28.275 [2024-11-26 19:46:19.030498] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:05:28.275 [2024-11-26 19:46:19.030613] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:05:28.275 [2024-11-26 19:46:19.030668] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:05:28.275 [2024-11-26 19:46:19.030679] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline
00:05:28.275 19:46:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@978 -- # wait 59344
00:05:28.275 [2024-11-26 19:46:19.042527] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:05:29.224 19:46:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@389 -- # return 0
00:05:29.224
00:05:29.224 real 0m1.847s
00:05:29.224 user 0m1.968s
00:05:29.224 sys 0m0.296s
00:05:29.224 19:46:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:29.224 19:46:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x
00:05:29.224 ************************************
00:05:29.224 END TEST raid0_resize_test
00:05:29.224 ************************************
00:05:29.224 19:46:19 bdev_raid -- bdev/bdev_raid.sh@964 -- # run_test raid1_resize_test raid_resize_test 1
00:05:29.224 19:46:19 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:05:29.224 19:46:19 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:29.224 19:46:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:05:29.224 ************************************
00:05:29.224 START TEST raid1_resize_test
00:05:29.224 ************************************
00:05:29.224 19:46:19 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1129 -- # raid_resize_test 1
00:05:29.224 19:46:19 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=1
00:05:29.224 19:46:19 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512
00:05:29.224 19:46:19 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32
00:05:29.224 19:46:19 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64
00:05:29.224 19:46:19 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt
00:05:29.224 19:46:19 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb
00:05:29.224 19:46:19 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb
00:05:29.224 19:46:19 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size
00:05:29.224 19:46:19 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=59400
00:05:29.224 Process raid pid: 59400
00:05:29.224 19:46:19 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 59400'
00:05:29.224 19:46:19 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 59400
00:05:29.224 19:46:19 bdev_raid.raid1_resize_test -- common/autotest_common.sh@835 -- # '[' -z 59400 ']'
00:05:29.224 19:46:19 bdev_raid.raid1_resize_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:29.224 19:46:19 bdev_raid.raid1_resize_test -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:29.224 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:29.224 19:46:19 bdev_raid.raid1_resize_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:29.224 19:46:19 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:05:29.224 19:46:19 bdev_raid.raid1_resize_test -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:29.224 19:46:19 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x
00:05:29.224 [2024-11-26 19:46:19.919492] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization...
00:05:29.224 [2024-11-26 19:46:19.919616] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:05:29.224 [2024-11-26 19:46:20.077603] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:29.482 [2024-11-26 19:46:20.200181] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:29.482 [2024-11-26 19:46:20.351242] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:05:29.482 [2024-11-26 19:46:20.351298] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:05:30.048 19:46:20 bdev_raid.raid1_resize_test -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:30.048 19:46:20 bdev_raid.raid1_resize_test -- common/autotest_common.sh@868 -- # return 0
00:05:30.048 19:46:20 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512
00:05:30.048 19:46:20 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:30.048 19:46:20 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x
00:05:30.048 Base_1
00:05:30.048 19:46:20 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:30.048 19:46:20 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512
00:05:30.048 19:46:20 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:30.048 19:46:20 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x
00:05:30.048 Base_2
00:05:30.048 19:46:20 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:30.048 19:46:20 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 1 -eq 0 ']'
00:05:30.048 19:46:20 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@352 -- # rpc_cmd bdev_raid_create -r 1 -b ''\''Base_1 Base_2'\''' -n Raid
00:05:30.048 19:46:20 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:30.048 19:46:20 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x
00:05:30.048 [2024-11-26 19:46:20.801713] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed
00:05:30.048 [2024-11-26 19:46:20.803677] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed
00:05:30.048 [2024-11-26 19:46:20.803748] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
00:05:30.048 [2024-11-26 19:46:20.803760] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512
00:05:30.048 [2024-11-26 19:46:20.804053] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0
00:05:30.048 [2024-11-26 19:46:20.804186] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
00:05:30.048 [2024-11-26 19:46:20.804201] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780
00:05:30.048 [2024-11-26 19:46:20.804375] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:05:30.048 19:46:20 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:30.048 19:46:20 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64
00:05:30.049 19:46:20 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:30.049 19:46:20 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x
00:05:30.049 [2024-11-26 19:46:20.809699] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev
00:05:30.049 [2024-11-26 19:46:20.809731] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072
00:05:30.049 true
00:05:30.049 19:46:20 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:30.049 19:46:20 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks'
00:05:30.049 19:46:20 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid
00:05:30.049 19:46:20 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:30.049 19:46:20 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x
00:05:30.049 [2024-11-26 19:46:20.821903] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:05:30.049 19:46:20 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:30.049 19:46:20 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=65536
00:05:30.049 19:46:20 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=32
00:05:30.049 19:46:20 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 1 -eq 0 ']'
00:05:30.049 19:46:20 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@364 -- # expected_size=32
00:05:30.049 19:46:20 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 32 '!=' 32 ']'
00:05:30.049 19:46:20 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64
00:05:30.049 19:46:20 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:30.049 19:46:20 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x
00:05:30.049 [2024-11-26 19:46:20.849761] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev
00:05:30.049 [2024-11-26 19:46:20.849801] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072
00:05:30.049 [2024-11-26 19:46:20.849834] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 65536 to 131072
00:05:30.049 true
00:05:30.049 19:46:20 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:30.049 19:46:20 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks'
00:05:30.049 19:46:20 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid
00:05:30.049 19:46:20 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:30.049 19:46:20 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x
00:05:30.049 [2024-11-26 19:46:20.861928] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:05:30.049 19:46:20 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:30.049 19:46:20 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=131072
00:05:30.049 19:46:20 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=64
00:05:30.049 19:46:20 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 1 -eq 0 ']'
00:05:30.049 19:46:20 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@380 -- # expected_size=64
00:05:30.049 19:46:20 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 64 '!=' 64 ']'
00:05:30.049 19:46:20 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 59400
00:05:30.049 19:46:20 bdev_raid.raid1_resize_test -- common/autotest_common.sh@954 -- # '[' -z 59400 ']'
00:05:30.049 19:46:20 bdev_raid.raid1_resize_test -- common/autotest_common.sh@958 -- # kill -0 59400
00:05:30.049 19:46:20 bdev_raid.raid1_resize_test -- common/autotest_common.sh@959 -- # uname
00:05:30.049 19:46:20 bdev_raid.raid1_resize_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:30.049 19:46:20 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59400
00:05:30.049 19:46:20 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:05:30.049 19:46:20 bdev_raid.raid1_resize_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:05:30.049 killing process with pid 59400
00:05:30.049 19:46:20 bdev_raid.raid1_resize_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59400'
00:05:30.049 19:46:20 bdev_raid.raid1_resize_test -- common/autotest_common.sh@973 -- # kill 59400
00:05:30.049 [2024-11-26 19:46:20.923532] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:05:30.049 [2024-11-26 19:46:20.923633] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:05:30.049 19:46:20 bdev_raid.raid1_resize_test -- common/autotest_common.sh@978 -- # wait 59400
00:05:30.049 [2024-11-26 19:46:20.924119] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:05:30.049 [2024-11-26 19:46:20.924143] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline
00:05:30.049 [2024-11-26 19:46:20.936033] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:05:30.983 19:46:21 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@389 -- # return 0
00:05:30.983
00:05:30.983 real 0m1.856s
00:05:30.983 user 0m1.999s
00:05:30.983 sys 0m0.272s
00:05:30.983 19:46:21 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:30.983 19:46:21 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x
00:05:30.983 ************************************
00:05:30.983 END TEST raid1_resize_test
00:05:30.983 ************************************
00:05:30.983 19:46:21 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4}
00:05:30.983 19:46:21 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1
00:05:30.983 19:46:21 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false
00:05:30.983 19:46:21 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:05:30.983 19:46:21 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:30.983 19:46:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:05:30.983 ************************************
00:05:30.983 START TEST raid_state_function_test
00:05:30.983 ************************************
00:05:30.983 19:46:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 2 false
00:05:30.983 19:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0
00:05:30.983 19:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2
00:05:30.983 19:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false
00:05:30.983 19:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev
00:05:30.983 19:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 ))
00:05:30.983 19:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:05:30.983 19:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1
00:05:30.983 19:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:05:30.983 19:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:05:30.983 19:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2
00:05:30.983 19:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:05:30.983 19:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:05:30.983 19:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2')
00:05:30.983 19:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs
00:05:30.983 19:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid
00:05:30.983 19:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size
00:05:30.983 19:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg
00:05:30.983 19:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg
00:05:30.983 19:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']'
00:05:30.983 19:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64
00:05:30.983 19:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64'
00:05:30.983 19:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']'
00:05:30.983 19:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg=
00:05:30.983 19:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=59457
00:05:30.983 Process raid pid: 59457
00:05:30.983 19:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 59457'
00:05:30.983 19:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 59457
00:05:30.983 19:46:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 59457 ']'
00:05:30.983 19:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:05:30.983 19:46:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:30.983 19:46:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:30.983 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:30.983 19:46:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:30.983 19:46:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:30.983 19:46:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:05:30.983 [2024-11-26 19:46:21.828299] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization...
00:05:30.983 [2024-11-26 19:46:21.828444] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:05:31.241 [2024-11-26 19:46:21.989298] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:31.241 [2024-11-26 19:46:22.110051] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:31.499 [2024-11-26 19:46:22.264159] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:05:31.499 [2024-11-26 19:46:22.264214] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:05:31.757 19:46:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:31.757 19:46:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0
00:05:31.757 19:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
00:05:31.757 19:46:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:31.757 19:46:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:05:31.757 [2024-11-26 19:46:22.679559] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:05:31.757 [2024-11-26 19:46:22.679623] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:05:31.757 [2024-11-26 19:46:22.679634] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:05:31.757 [2024-11-26 19:46:22.679644] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:05:31.757 19:46:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:31.757 19:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2
00:05:31.757 19:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:05:31.757 19:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:05:31.757 19:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:05:31.757 19:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:05:31.757 19:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:05:31.757 19:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:05:31.757 19:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:05:31.757 19:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:05:31.757 19:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:05:31.757 19:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:05:31.757 19:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:05:31.757 19:46:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:31.757 19:46:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:05:32.015 19:46:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:32.015 19:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:05:32.015 "name": "Existed_Raid",
00:05:32.015 "uuid": "00000000-0000-0000-0000-000000000000",
00:05:32.015 "strip_size_kb": 64,
00:05:32.015 "state": "configuring",
00:05:32.015 "raid_level": "raid0",
00:05:32.015 "superblock": false,
00:05:32.015 "num_base_bdevs": 2,
00:05:32.015 "num_base_bdevs_discovered": 0,
00:05:32.015 "num_base_bdevs_operational": 2,
00:05:32.015 "base_bdevs_list": [
00:05:32.015 {
00:05:32.015 "name": "BaseBdev1",
00:05:32.015 "uuid": "00000000-0000-0000-0000-000000000000",
00:05:32.015 "is_configured": false,
00:05:32.015 "data_offset": 0,
00:05:32.015 "data_size": 0
00:05:32.015 },
00:05:32.015 {
00:05:32.015 "name": "BaseBdev2",
00:05:32.015 "uuid": "00000000-0000-0000-0000-000000000000",
00:05:32.015 "is_configured": false,
00:05:32.015 "data_offset": 0,
00:05:32.015 "data_size": 0
00:05:32.015 }
00:05:32.015 ]
00:05:32.015 }'
00:05:32.015 19:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:05:32.015 19:46:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:05:32.274 19:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:05:32.274 19:46:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:32.274 19:46:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:05:32.274 [2024-11-26 19:46:22.991607] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:05:32.274 [2024-11-26 19:46:22.991665] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring
00:05:32.274 19:46:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:32.274 19:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
00:05:32.274 19:46:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:32.274 19:46:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:05:32.274 [2024-11-26 19:46:23.003618] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:05:32.274 [2024-11-26 19:46:23.003677] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:05:32.274 [2024-11-26 19:46:23.003687] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:05:32.274 [2024-11-26 19:46:23.003700] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:05:32.274 19:46:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:32.274 19:46:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:05:32.274 19:46:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:32.274 19:46:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:05:32.274 [2024-11-26 19:46:23.039638] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:05:32.274 BaseBdev1
00:05:32.274 19:46:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:32.274 19:46:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1
00:05:32.274 19:46:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1
00:05:32.274 19:46:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:05:32.274 19:46:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:05:32.274 19:46:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:05:32.274 19:46:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:05:32.274 19:46:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:05:32.274 19:46:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:32.274 19:46:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:05:32.274 19:46:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:32.274 19:46:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:05:32.274 19:46:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:32.274 19:46:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:05:32.274 [
00:05:32.274 {
00:05:32.274 "name": "BaseBdev1",
00:05:32.274 "aliases": [
00:05:32.274 "8db0e188-5d3a-483d-bdf7-4f23e2075cee"
00:05:32.274 ],
00:05:32.274 "product_name": "Malloc disk",
00:05:32.274 "block_size": 512,
00:05:32.274 "num_blocks": 65536,
00:05:32.274 "uuid":
"8db0e188-5d3a-483d-bdf7-4f23e2075cee", 00:05:32.274 "assigned_rate_limits": { 00:05:32.274 "rw_ios_per_sec": 0, 00:05:32.274 "rw_mbytes_per_sec": 0, 00:05:32.274 "r_mbytes_per_sec": 0, 00:05:32.274 "w_mbytes_per_sec": 0 00:05:32.274 }, 00:05:32.274 "claimed": true, 00:05:32.274 "claim_type": "exclusive_write", 00:05:32.274 "zoned": false, 00:05:32.274 "supported_io_types": { 00:05:32.274 "read": true, 00:05:32.274 "write": true, 00:05:32.274 "unmap": true, 00:05:32.274 "flush": true, 00:05:32.274 "reset": true, 00:05:32.275 "nvme_admin": false, 00:05:32.275 "nvme_io": false, 00:05:32.275 "nvme_io_md": false, 00:05:32.275 "write_zeroes": true, 00:05:32.275 "zcopy": true, 00:05:32.275 "get_zone_info": false, 00:05:32.275 "zone_management": false, 00:05:32.275 "zone_append": false, 00:05:32.275 "compare": false, 00:05:32.275 "compare_and_write": false, 00:05:32.275 "abort": true, 00:05:32.275 "seek_hole": false, 00:05:32.275 "seek_data": false, 00:05:32.275 "copy": true, 00:05:32.275 "nvme_iov_md": false 00:05:32.275 }, 00:05:32.275 "memory_domains": [ 00:05:32.275 { 00:05:32.275 "dma_device_id": "system", 00:05:32.275 "dma_device_type": 1 00:05:32.275 }, 00:05:32.275 { 00:05:32.275 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:32.275 "dma_device_type": 2 00:05:32.275 } 00:05:32.275 ], 00:05:32.275 "driver_specific": {} 00:05:32.275 } 00:05:32.275 ] 00:05:32.275 19:46:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:32.275 19:46:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:05:32.275 19:46:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:05:32.275 19:46:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:05:32.275 19:46:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:05:32.275 19:46:23 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:05:32.275 19:46:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:05:32.275 19:46:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:05:32.275 19:46:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:05:32.275 19:46:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:05:32.275 19:46:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:05:32.275 19:46:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:05:32.275 19:46:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:05:32.275 19:46:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:05:32.275 19:46:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:32.275 19:46:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:05:32.275 19:46:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:32.275 19:46:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:05:32.275 "name": "Existed_Raid", 00:05:32.275 "uuid": "00000000-0000-0000-0000-000000000000", 00:05:32.275 "strip_size_kb": 64, 00:05:32.275 "state": "configuring", 00:05:32.275 "raid_level": "raid0", 00:05:32.275 "superblock": false, 00:05:32.275 "num_base_bdevs": 2, 00:05:32.275 "num_base_bdevs_discovered": 1, 00:05:32.275 "num_base_bdevs_operational": 2, 00:05:32.275 "base_bdevs_list": [ 00:05:32.275 { 00:05:32.275 "name": "BaseBdev1", 00:05:32.275 "uuid": "8db0e188-5d3a-483d-bdf7-4f23e2075cee", 00:05:32.275 "is_configured": true, 00:05:32.275 "data_offset": 0, 
00:05:32.275 "data_size": 65536 00:05:32.275 }, 00:05:32.275 { 00:05:32.275 "name": "BaseBdev2", 00:05:32.275 "uuid": "00000000-0000-0000-0000-000000000000", 00:05:32.275 "is_configured": false, 00:05:32.275 "data_offset": 0, 00:05:32.275 "data_size": 0 00:05:32.275 } 00:05:32.275 ] 00:05:32.275 }' 00:05:32.275 19:46:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:05:32.275 19:46:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:05:32.535 19:46:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:05:32.535 19:46:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:32.535 19:46:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:05:32.535 [2024-11-26 19:46:23.371759] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:05:32.535 [2024-11-26 19:46:23.371818] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:05:32.535 19:46:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:32.535 19:46:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:05:32.535 19:46:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:32.535 19:46:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:05:32.535 [2024-11-26 19:46:23.379811] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:05:32.535 [2024-11-26 19:46:23.381919] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:05:32.535 [2024-11-26 19:46:23.381967] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 
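The `verify_raid_bdev_state` calls in the trace above filter `bdev_raid_get_bdevs all` output with jq and compare the resulting JSON field-by-field against the expected state. A minimal Python sketch of that same check, using a trimmed copy of the `raid_bdev_info` JSON shown in the log as sample input (the function below is an illustration of the comparison logic, not SPDK's actual shell helper):

```python
import json

# Sample bdev_raid_get_bdevs output captured in the trace above,
# trimmed to the fields the state check actually reads.
raid_bdev_info = json.loads("""
{
  "name": "Existed_Raid",
  "strip_size_kb": 64,
  "state": "configuring",
  "raid_level": "raid0",
  "num_base_bdevs": 2,
  "num_base_bdevs_discovered": 1,
  "num_base_bdevs_operational": 2
}
""")

def verify_raid_bdev_state(info, expected_state, raid_level,
                           strip_size, num_operational):
    """Illustrative re-implementation of the field comparison the
    test script performs on the jq-filtered JSON (not SPDK code)."""
    assert info["state"] == expected_state, info["state"]
    assert info["raid_level"] == raid_level
    assert info["strip_size_kb"] == strip_size
    assert info["num_base_bdevs_operational"] == num_operational

# Mirrors "verify_raid_bdev_state Existed_Raid configuring raid0 64 2"
verify_raid_bdev_state(raid_bdev_info, "configuring", "raid0", 64, 2)
```

The trace repeats this check after each mutation (create, claim a base bdev, delete), which is why the same JSON dump appears with `num_base_bdevs_discovered` stepping from 0 to 2.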
00:05:32.536 19:46:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:32.536 19:46:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:05:32.536 19:46:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:05:32.536 19:46:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:05:32.536 19:46:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:05:32.536 19:46:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:05:32.536 19:46:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:05:32.536 19:46:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:05:32.536 19:46:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:05:32.536 19:46:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:05:32.536 19:46:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:05:32.536 19:46:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:05:32.536 19:46:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:05:32.536 19:46:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:05:32.536 19:46:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:05:32.536 19:46:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:32.536 19:46:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:05:32.536 19:46:23 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:32.536 19:46:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:05:32.536 "name": "Existed_Raid", 00:05:32.536 "uuid": "00000000-0000-0000-0000-000000000000", 00:05:32.536 "strip_size_kb": 64, 00:05:32.536 "state": "configuring", 00:05:32.536 "raid_level": "raid0", 00:05:32.536 "superblock": false, 00:05:32.536 "num_base_bdevs": 2, 00:05:32.536 "num_base_bdevs_discovered": 1, 00:05:32.536 "num_base_bdevs_operational": 2, 00:05:32.536 "base_bdevs_list": [ 00:05:32.536 { 00:05:32.536 "name": "BaseBdev1", 00:05:32.536 "uuid": "8db0e188-5d3a-483d-bdf7-4f23e2075cee", 00:05:32.536 "is_configured": true, 00:05:32.536 "data_offset": 0, 00:05:32.536 "data_size": 65536 00:05:32.536 }, 00:05:32.536 { 00:05:32.536 "name": "BaseBdev2", 00:05:32.536 "uuid": "00000000-0000-0000-0000-000000000000", 00:05:32.536 "is_configured": false, 00:05:32.536 "data_offset": 0, 00:05:32.536 "data_size": 0 00:05:32.536 } 00:05:32.536 ] 00:05:32.536 }' 00:05:32.536 19:46:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:05:32.536 19:46:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:05:32.797 19:46:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:05:32.797 19:46:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:32.797 19:46:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:05:32.797 [2024-11-26 19:46:23.722709] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:05:32.797 [2024-11-26 19:46:23.722778] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:05:32.797 [2024-11-26 19:46:23.722792] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:05:32.797 [2024-11-26 19:46:23.723164] 
bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:05:32.797 [2024-11-26 19:46:23.723422] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:05:32.797 [2024-11-26 19:46:23.723450] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:05:32.797 [2024-11-26 19:46:23.723790] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:05:32.797 BaseBdev2 00:05:32.797 19:46:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:32.797 19:46:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:05:32.797 19:46:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:05:32.797 19:46:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:05:32.798 19:46:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:05:32.798 19:46:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:05:32.798 19:46:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:05:32.798 19:46:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:05:32.798 19:46:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:32.798 19:46:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:05:33.058 19:46:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:33.058 19:46:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:05:33.058 19:46:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:33.058 19:46:23 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:05:33.058 [ 00:05:33.058 { 00:05:33.058 "name": "BaseBdev2", 00:05:33.058 "aliases": [ 00:05:33.058 "b62084a7-5031-403a-9fc3-04e3fcf21767" 00:05:33.058 ], 00:05:33.058 "product_name": "Malloc disk", 00:05:33.058 "block_size": 512, 00:05:33.058 "num_blocks": 65536, 00:05:33.058 "uuid": "b62084a7-5031-403a-9fc3-04e3fcf21767", 00:05:33.058 "assigned_rate_limits": { 00:05:33.058 "rw_ios_per_sec": 0, 00:05:33.058 "rw_mbytes_per_sec": 0, 00:05:33.058 "r_mbytes_per_sec": 0, 00:05:33.058 "w_mbytes_per_sec": 0 00:05:33.058 }, 00:05:33.058 "claimed": true, 00:05:33.058 "claim_type": "exclusive_write", 00:05:33.058 "zoned": false, 00:05:33.058 "supported_io_types": { 00:05:33.058 "read": true, 00:05:33.058 "write": true, 00:05:33.058 "unmap": true, 00:05:33.058 "flush": true, 00:05:33.058 "reset": true, 00:05:33.058 "nvme_admin": false, 00:05:33.058 "nvme_io": false, 00:05:33.058 "nvme_io_md": false, 00:05:33.058 "write_zeroes": true, 00:05:33.058 "zcopy": true, 00:05:33.058 "get_zone_info": false, 00:05:33.058 "zone_management": false, 00:05:33.058 "zone_append": false, 00:05:33.058 "compare": false, 00:05:33.058 "compare_and_write": false, 00:05:33.058 "abort": true, 00:05:33.058 "seek_hole": false, 00:05:33.058 "seek_data": false, 00:05:33.058 "copy": true, 00:05:33.058 "nvme_iov_md": false 00:05:33.058 }, 00:05:33.058 "memory_domains": [ 00:05:33.058 { 00:05:33.058 "dma_device_id": "system", 00:05:33.058 "dma_device_type": 1 00:05:33.058 }, 00:05:33.058 { 00:05:33.058 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:33.058 "dma_device_type": 2 00:05:33.058 } 00:05:33.058 ], 00:05:33.058 "driver_specific": {} 00:05:33.058 } 00:05:33.058 ] 00:05:33.058 19:46:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:33.058 19:46:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:05:33.058 19:46:23 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:05:33.058 19:46:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:05:33.058 19:46:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:05:33.058 19:46:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:05:33.058 19:46:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:05:33.058 19:46:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:05:33.058 19:46:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:05:33.058 19:46:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:05:33.058 19:46:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:05:33.058 19:46:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:05:33.058 19:46:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:05:33.058 19:46:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:05:33.058 19:46:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:05:33.058 19:46:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:05:33.058 19:46:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:33.058 19:46:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:05:33.058 19:46:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:33.058 19:46:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:05:33.058 "name": "Existed_Raid", 00:05:33.058 "uuid": "5bc7adeb-8fdf-4041-941c-2ec563219283", 00:05:33.058 "strip_size_kb": 64, 00:05:33.058 "state": "online", 00:05:33.058 "raid_level": "raid0", 00:05:33.058 "superblock": false, 00:05:33.058 "num_base_bdevs": 2, 00:05:33.058 "num_base_bdevs_discovered": 2, 00:05:33.058 "num_base_bdevs_operational": 2, 00:05:33.058 "base_bdevs_list": [ 00:05:33.058 { 00:05:33.058 "name": "BaseBdev1", 00:05:33.058 "uuid": "8db0e188-5d3a-483d-bdf7-4f23e2075cee", 00:05:33.058 "is_configured": true, 00:05:33.058 "data_offset": 0, 00:05:33.058 "data_size": 65536 00:05:33.058 }, 00:05:33.058 { 00:05:33.058 "name": "BaseBdev2", 00:05:33.058 "uuid": "b62084a7-5031-403a-9fc3-04e3fcf21767", 00:05:33.058 "is_configured": true, 00:05:33.058 "data_offset": 0, 00:05:33.058 "data_size": 65536 00:05:33.058 } 00:05:33.058 ] 00:05:33.058 }' 00:05:33.058 19:46:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:05:33.058 19:46:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:05:33.316 19:46:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:05:33.316 19:46:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:05:33.316 19:46:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:05:33.316 19:46:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:05:33.316 19:46:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:05:33.316 19:46:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:05:33.316 19:46:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:05:33.316 19:46:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq 
'.[]' 00:05:33.316 19:46:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:33.316 19:46:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:05:33.316 [2024-11-26 19:46:24.071185] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:05:33.316 19:46:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:33.316 19:46:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:05:33.316 "name": "Existed_Raid", 00:05:33.316 "aliases": [ 00:05:33.316 "5bc7adeb-8fdf-4041-941c-2ec563219283" 00:05:33.316 ], 00:05:33.316 "product_name": "Raid Volume", 00:05:33.316 "block_size": 512, 00:05:33.316 "num_blocks": 131072, 00:05:33.316 "uuid": "5bc7adeb-8fdf-4041-941c-2ec563219283", 00:05:33.316 "assigned_rate_limits": { 00:05:33.316 "rw_ios_per_sec": 0, 00:05:33.316 "rw_mbytes_per_sec": 0, 00:05:33.316 "r_mbytes_per_sec": 0, 00:05:33.316 "w_mbytes_per_sec": 0 00:05:33.316 }, 00:05:33.316 "claimed": false, 00:05:33.316 "zoned": false, 00:05:33.316 "supported_io_types": { 00:05:33.316 "read": true, 00:05:33.316 "write": true, 00:05:33.316 "unmap": true, 00:05:33.316 "flush": true, 00:05:33.316 "reset": true, 00:05:33.316 "nvme_admin": false, 00:05:33.316 "nvme_io": false, 00:05:33.316 "nvme_io_md": false, 00:05:33.316 "write_zeroes": true, 00:05:33.316 "zcopy": false, 00:05:33.316 "get_zone_info": false, 00:05:33.316 "zone_management": false, 00:05:33.316 "zone_append": false, 00:05:33.316 "compare": false, 00:05:33.316 "compare_and_write": false, 00:05:33.316 "abort": false, 00:05:33.316 "seek_hole": false, 00:05:33.316 "seek_data": false, 00:05:33.316 "copy": false, 00:05:33.316 "nvme_iov_md": false 00:05:33.316 }, 00:05:33.316 "memory_domains": [ 00:05:33.316 { 00:05:33.316 "dma_device_id": "system", 00:05:33.316 "dma_device_type": 1 00:05:33.316 }, 00:05:33.316 { 00:05:33.316 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:05:33.316 "dma_device_type": 2 00:05:33.316 }, 00:05:33.316 { 00:05:33.316 "dma_device_id": "system", 00:05:33.316 "dma_device_type": 1 00:05:33.316 }, 00:05:33.316 { 00:05:33.316 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:33.316 "dma_device_type": 2 00:05:33.316 } 00:05:33.316 ], 00:05:33.316 "driver_specific": { 00:05:33.316 "raid": { 00:05:33.316 "uuid": "5bc7adeb-8fdf-4041-941c-2ec563219283", 00:05:33.316 "strip_size_kb": 64, 00:05:33.316 "state": "online", 00:05:33.316 "raid_level": "raid0", 00:05:33.316 "superblock": false, 00:05:33.316 "num_base_bdevs": 2, 00:05:33.316 "num_base_bdevs_discovered": 2, 00:05:33.316 "num_base_bdevs_operational": 2, 00:05:33.316 "base_bdevs_list": [ 00:05:33.316 { 00:05:33.316 "name": "BaseBdev1", 00:05:33.316 "uuid": "8db0e188-5d3a-483d-bdf7-4f23e2075cee", 00:05:33.316 "is_configured": true, 00:05:33.316 "data_offset": 0, 00:05:33.316 "data_size": 65536 00:05:33.316 }, 00:05:33.316 { 00:05:33.316 "name": "BaseBdev2", 00:05:33.316 "uuid": "b62084a7-5031-403a-9fc3-04e3fcf21767", 00:05:33.316 "is_configured": true, 00:05:33.316 "data_offset": 0, 00:05:33.316 "data_size": 65536 00:05:33.316 } 00:05:33.316 ] 00:05:33.316 } 00:05:33.316 } 00:05:33.316 }' 00:05:33.316 19:46:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:05:33.316 19:46:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:05:33.316 BaseBdev2' 00:05:33.316 19:46:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:05:33.316 19:46:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:05:33.316 19:46:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:05:33.316 19:46:24 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:05:33.316 19:46:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:33.316 19:46:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:05:33.316 19:46:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:05:33.316 19:46:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:33.316 19:46:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:05:33.316 19:46:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:05:33.316 19:46:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:05:33.316 19:46:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:05:33.316 19:46:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:33.316 19:46:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:05:33.316 19:46:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:05:33.316 19:46:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:33.316 19:46:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:05:33.316 19:46:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:05:33.316 19:46:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:05:33.316 19:46:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:33.316 19:46:24 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:05:33.316 [2024-11-26 19:46:24.234930] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:05:33.316 [2024-11-26 19:46:24.234986] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:05:33.316 [2024-11-26 19:46:24.235046] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:05:33.574 19:46:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:33.574 19:46:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:05:33.574 19:46:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:05:33.574 19:46:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:05:33.574 19:46:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:05:33.574 19:46:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:05:33.574 19:46:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:05:33.574 19:46:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:05:33.574 19:46:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:05:33.574 19:46:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:05:33.574 19:46:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:05:33.574 19:46:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:05:33.574 19:46:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:05:33.574 19:46:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:05:33.574 19:46:24 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:05:33.574 19:46:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:05:33.574 19:46:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:05:33.574 19:46:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:05:33.574 19:46:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:33.574 19:46:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:05:33.574 19:46:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:33.574 19:46:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:05:33.574 "name": "Existed_Raid", 00:05:33.574 "uuid": "5bc7adeb-8fdf-4041-941c-2ec563219283", 00:05:33.574 "strip_size_kb": 64, 00:05:33.574 "state": "offline", 00:05:33.574 "raid_level": "raid0", 00:05:33.574 "superblock": false, 00:05:33.574 "num_base_bdevs": 2, 00:05:33.574 "num_base_bdevs_discovered": 1, 00:05:33.574 "num_base_bdevs_operational": 1, 00:05:33.575 "base_bdevs_list": [ 00:05:33.575 { 00:05:33.575 "name": null, 00:05:33.575 "uuid": "00000000-0000-0000-0000-000000000000", 00:05:33.575 "is_configured": false, 00:05:33.575 "data_offset": 0, 00:05:33.575 "data_size": 65536 00:05:33.575 }, 00:05:33.575 { 00:05:33.575 "name": "BaseBdev2", 00:05:33.575 "uuid": "b62084a7-5031-403a-9fc3-04e3fcf21767", 00:05:33.575 "is_configured": true, 00:05:33.575 "data_offset": 0, 00:05:33.575 "data_size": 65536 00:05:33.575 } 00:05:33.575 ] 00:05:33.575 }' 00:05:33.575 19:46:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:05:33.575 19:46:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:05:33.833 19:46:24 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:05:33.833 19:46:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:05:33.833 19:46:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:05:33.833 19:46:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:33.833 19:46:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:05:33.833 19:46:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:05:33.833 19:46:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:33.833 19:46:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:05:33.833 19:46:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:05:33.833 19:46:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:05:33.833 19:46:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:33.833 19:46:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:05:33.833 [2024-11-26 19:46:24.640928] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:05:33.833 [2024-11-26 19:46:24.640990] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:05:33.833 19:46:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:33.833 19:46:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:05:33.833 19:46:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:05:33.833 19:46:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:05:33.833 19:46:24 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:33.833 19:46:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:05:33.833 19:46:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:05:33.833 19:46:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:33.833 19:46:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:05:33.833 19:46:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:05:33.833 19:46:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:05:33.833 19:46:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 59457 00:05:33.833 19:46:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 59457 ']' 00:05:33.833 19:46:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 59457 00:05:33.833 19:46:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:05:33.833 19:46:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:33.833 19:46:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59457 00:05:33.833 19:46:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:33.833 19:46:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:33.833 killing process with pid 59457 00:05:33.833 19:46:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59457' 00:05:33.833 19:46:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 59457 00:05:33.833 [2024-11-26 19:46:24.751268] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: 
raid_bdev_fini_start 00:05:33.833 19:46:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 59457 00:05:33.834 [2024-11-26 19:46:24.760161] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:05:34.771 19:46:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:05:34.771 00:05:34.771 real 0m3.618s 00:05:34.771 user 0m5.143s 00:05:34.771 sys 0m0.634s 00:05:34.771 19:46:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:34.771 19:46:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:05:34.771 ************************************ 00:05:34.771 END TEST raid_state_function_test 00:05:34.771 ************************************ 00:05:34.771 19:46:25 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:05:34.771 19:46:25 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:05:34.771 19:46:25 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:34.771 19:46:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:05:34.771 ************************************ 00:05:34.771 START TEST raid_state_function_test_sb 00:05:34.771 ************************************ 00:05:34.771 19:46:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 2 true 00:05:34.771 19:46:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:05:34.771 19:46:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:05:34.771 19:46:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:05:34.771 19:46:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:05:34.771 19:46:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 
00:05:34.771 19:46:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:05:34.771 19:46:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:05:34.771 19:46:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:05:34.771 19:46:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:05:34.771 19:46:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:05:34.771 19:46:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:05:34.771 19:46:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:05:34.771 19:46:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:05:34.771 19:46:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:05:34.771 19:46:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:05:34.771 19:46:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:05:34.771 19:46:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:05:34.771 19:46:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:05:34.771 19:46:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:05:34.771 19:46:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:05:34.771 19:46:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:05:34.771 19:46:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:05:34.771 19:46:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # 
superblock_create_arg=-s 00:05:34.771 19:46:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=59693 00:05:34.771 Process raid pid: 59693 00:05:34.771 19:46:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 59693' 00:05:34.771 19:46:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 59693 00:05:34.771 19:46:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:05:34.771 19:46:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 59693 ']' 00:05:34.771 19:46:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:34.771 19:46:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:34.771 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:34.771 19:46:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:34.771 19:46:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:34.771 19:46:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:05:34.771 [2024-11-26 19:46:25.496356] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 
00:05:34.771 [2024-11-26 19:46:25.496496] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:34.771 [2024-11-26 19:46:25.658391] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:35.031 [2024-11-26 19:46:25.779836] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.031 [2024-11-26 19:46:25.930847] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:05:35.031 [2024-11-26 19:46:25.930900] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:05:35.600 19:46:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:35.600 19:46:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:05:35.600 19:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:05:35.600 19:46:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:35.600 19:46:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:05:35.600 [2024-11-26 19:46:26.355139] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:05:35.600 [2024-11-26 19:46:26.355198] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:05:35.600 [2024-11-26 19:46:26.355215] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:05:35.600 [2024-11-26 19:46:26.355226] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:05:35.600 19:46:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:35.600 
19:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:05:35.600 19:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:05:35.600 19:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:05:35.600 19:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:05:35.600 19:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:05:35.600 19:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:05:35.600 19:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:05:35.600 19:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:05:35.600 19:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:05:35.600 19:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:05:35.600 19:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:05:35.600 19:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:05:35.600 19:46:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:35.600 19:46:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:05:35.600 19:46:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:35.600 19:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:05:35.600 "name": "Existed_Raid", 00:05:35.600 "uuid": "70ab2e7c-87e6-4a8d-b891-771a16a46f10", 00:05:35.600 "strip_size_kb": 
64, 00:05:35.600 "state": "configuring", 00:05:35.600 "raid_level": "raid0", 00:05:35.600 "superblock": true, 00:05:35.600 "num_base_bdevs": 2, 00:05:35.600 "num_base_bdevs_discovered": 0, 00:05:35.600 "num_base_bdevs_operational": 2, 00:05:35.600 "base_bdevs_list": [ 00:05:35.600 { 00:05:35.600 "name": "BaseBdev1", 00:05:35.600 "uuid": "00000000-0000-0000-0000-000000000000", 00:05:35.600 "is_configured": false, 00:05:35.600 "data_offset": 0, 00:05:35.600 "data_size": 0 00:05:35.600 }, 00:05:35.600 { 00:05:35.600 "name": "BaseBdev2", 00:05:35.600 "uuid": "00000000-0000-0000-0000-000000000000", 00:05:35.600 "is_configured": false, 00:05:35.600 "data_offset": 0, 00:05:35.600 "data_size": 0 00:05:35.600 } 00:05:35.600 ] 00:05:35.600 }' 00:05:35.600 19:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:05:35.600 19:46:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:05:35.859 19:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:05:35.859 19:46:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:35.859 19:46:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:05:35.859 [2024-11-26 19:46:26.723167] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:05:35.859 [2024-11-26 19:46:26.723213] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:05:35.859 19:46:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:35.859 19:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:05:35.859 19:46:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:35.859 19:46:26 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:05:35.859 [2024-11-26 19:46:26.731163] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:05:35.859 [2024-11-26 19:46:26.731207] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:05:35.859 [2024-11-26 19:46:26.731217] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:05:35.859 [2024-11-26 19:46:26.731229] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:05:35.859 19:46:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:35.859 19:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:05:35.859 19:46:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:35.859 19:46:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:05:35.859 [2024-11-26 19:46:26.765927] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:05:35.859 BaseBdev1 00:05:35.859 19:46:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:35.859 19:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:05:35.859 19:46:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:05:35.859 19:46:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:05:35.859 19:46:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:05:35.859 19:46:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:05:35.859 19:46:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:05:35.859 19:46:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:05:35.859 19:46:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:35.859 19:46:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:05:35.859 19:46:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:35.859 19:46:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:05:35.859 19:46:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:35.859 19:46:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:05:35.859 [ 00:05:35.859 { 00:05:35.859 "name": "BaseBdev1", 00:05:35.859 "aliases": [ 00:05:35.859 "f70e4f9b-a97b-41c6-88c2-e71f9e41075c" 00:05:35.859 ], 00:05:35.859 "product_name": "Malloc disk", 00:05:35.859 "block_size": 512, 00:05:35.859 "num_blocks": 65536, 00:05:35.859 "uuid": "f70e4f9b-a97b-41c6-88c2-e71f9e41075c", 00:05:35.859 "assigned_rate_limits": { 00:05:35.859 "rw_ios_per_sec": 0, 00:05:35.859 "rw_mbytes_per_sec": 0, 00:05:35.859 "r_mbytes_per_sec": 0, 00:05:35.859 "w_mbytes_per_sec": 0 00:05:35.859 }, 00:05:35.859 "claimed": true, 00:05:35.859 "claim_type": "exclusive_write", 00:05:35.859 "zoned": false, 00:05:35.859 "supported_io_types": { 00:05:35.859 "read": true, 00:05:35.859 "write": true, 00:05:35.859 "unmap": true, 00:05:35.859 "flush": true, 00:05:35.859 "reset": true, 00:05:35.859 "nvme_admin": false, 00:05:35.859 "nvme_io": false, 00:05:35.859 "nvme_io_md": false, 00:05:35.859 "write_zeroes": true, 00:05:35.859 "zcopy": true, 00:05:35.859 "get_zone_info": false, 00:05:35.859 "zone_management": false, 00:05:35.859 "zone_append": false, 00:05:35.859 "compare": false, 00:05:35.859 "compare_and_write": false, 00:05:35.859 
"abort": true, 00:05:35.859 "seek_hole": false, 00:05:35.859 "seek_data": false, 00:05:35.859 "copy": true, 00:05:35.859 "nvme_iov_md": false 00:05:35.859 }, 00:05:35.859 "memory_domains": [ 00:05:35.859 { 00:05:35.859 "dma_device_id": "system", 00:05:35.859 "dma_device_type": 1 00:05:35.859 }, 00:05:35.859 { 00:05:35.859 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:35.859 "dma_device_type": 2 00:05:35.859 } 00:05:35.859 ], 00:05:35.859 "driver_specific": {} 00:05:35.860 } 00:05:36.117 ] 00:05:36.117 19:46:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:36.117 19:46:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:05:36.117 19:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:05:36.117 19:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:05:36.117 19:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:05:36.117 19:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:05:36.117 19:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:05:36.117 19:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:05:36.117 19:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:05:36.117 19:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:05:36.117 19:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:05:36.117 19:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:05:36.117 19:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:05:36.117 19:46:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:36.117 19:46:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:05:36.117 19:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:05:36.117 19:46:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:36.117 19:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:05:36.117 "name": "Existed_Raid", 00:05:36.117 "uuid": "a0fe5358-d719-44b7-9d2f-527514511d84", 00:05:36.117 "strip_size_kb": 64, 00:05:36.117 "state": "configuring", 00:05:36.117 "raid_level": "raid0", 00:05:36.117 "superblock": true, 00:05:36.117 "num_base_bdevs": 2, 00:05:36.117 "num_base_bdevs_discovered": 1, 00:05:36.117 "num_base_bdevs_operational": 2, 00:05:36.117 "base_bdevs_list": [ 00:05:36.117 { 00:05:36.117 "name": "BaseBdev1", 00:05:36.117 "uuid": "f70e4f9b-a97b-41c6-88c2-e71f9e41075c", 00:05:36.117 "is_configured": true, 00:05:36.117 "data_offset": 2048, 00:05:36.117 "data_size": 63488 00:05:36.117 }, 00:05:36.117 { 00:05:36.117 "name": "BaseBdev2", 00:05:36.117 "uuid": "00000000-0000-0000-0000-000000000000", 00:05:36.117 "is_configured": false, 00:05:36.117 "data_offset": 0, 00:05:36.117 "data_size": 0 00:05:36.117 } 00:05:36.117 ] 00:05:36.117 }' 00:05:36.117 19:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:05:36.117 19:46:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:05:36.375 19:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:05:36.375 19:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:36.375 19:46:27 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:05:36.375 [2024-11-26 19:46:27.126084] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:05:36.375 [2024-11-26 19:46:27.126145] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:05:36.375 19:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:36.375 19:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:05:36.375 19:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:36.375 19:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:05:36.375 [2024-11-26 19:46:27.134144] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:05:36.375 [2024-11-26 19:46:27.136157] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:05:36.375 [2024-11-26 19:46:27.136206] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:05:36.375 19:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:36.375 19:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:05:36.375 19:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:05:36.375 19:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:05:36.375 19:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:05:36.376 19:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:05:36.376 19:46:27 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:05:36.376 19:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:05:36.376 19:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:05:36.376 19:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:05:36.376 19:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:05:36.376 19:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:05:36.376 19:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:05:36.376 19:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:05:36.376 19:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:05:36.376 19:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:36.376 19:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:05:36.376 19:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:36.376 19:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:05:36.376 "name": "Existed_Raid", 00:05:36.376 "uuid": "22777c04-2c9a-4050-b9f2-0859bd9d1fa2", 00:05:36.376 "strip_size_kb": 64, 00:05:36.376 "state": "configuring", 00:05:36.376 "raid_level": "raid0", 00:05:36.376 "superblock": true, 00:05:36.376 "num_base_bdevs": 2, 00:05:36.376 "num_base_bdevs_discovered": 1, 00:05:36.376 "num_base_bdevs_operational": 2, 00:05:36.376 "base_bdevs_list": [ 00:05:36.376 { 00:05:36.376 "name": "BaseBdev1", 00:05:36.376 "uuid": "f70e4f9b-a97b-41c6-88c2-e71f9e41075c", 00:05:36.376 "is_configured": true, 00:05:36.376 "data_offset": 2048, 
00:05:36.376 "data_size": 63488 00:05:36.376 }, 00:05:36.376 { 00:05:36.376 "name": "BaseBdev2", 00:05:36.376 "uuid": "00000000-0000-0000-0000-000000000000", 00:05:36.376 "is_configured": false, 00:05:36.376 "data_offset": 0, 00:05:36.376 "data_size": 0 00:05:36.376 } 00:05:36.376 ] 00:05:36.376 }' 00:05:36.376 19:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:05:36.376 19:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:05:36.635 19:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:05:36.635 19:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:36.635 19:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:05:36.635 [2024-11-26 19:46:27.471032] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:05:36.635 [2024-11-26 19:46:27.471268] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:05:36.635 [2024-11-26 19:46:27.471287] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:05:36.635 BaseBdev2 00:05:36.635 [2024-11-26 19:46:27.471569] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:05:36.635 [2024-11-26 19:46:27.471710] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:05:36.635 [2024-11-26 19:46:27.471731] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:05:36.635 [2024-11-26 19:46:27.471860] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:05:36.635 19:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:36.635 19:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # 
waitforbdev BaseBdev2 00:05:36.635 19:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:05:36.635 19:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:05:36.635 19:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:05:36.635 19:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:05:36.635 19:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:05:36.635 19:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:05:36.635 19:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:36.635 19:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:05:36.635 19:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:36.635 19:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:05:36.635 19:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:36.635 19:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:05:36.635 [ 00:05:36.635 { 00:05:36.635 "name": "BaseBdev2", 00:05:36.635 "aliases": [ 00:05:36.635 "dd400631-ec04-4f1f-a41d-93b10d12464f" 00:05:36.635 ], 00:05:36.635 "product_name": "Malloc disk", 00:05:36.635 "block_size": 512, 00:05:36.635 "num_blocks": 65536, 00:05:36.635 "uuid": "dd400631-ec04-4f1f-a41d-93b10d12464f", 00:05:36.635 "assigned_rate_limits": { 00:05:36.635 "rw_ios_per_sec": 0, 00:05:36.635 "rw_mbytes_per_sec": 0, 00:05:36.635 "r_mbytes_per_sec": 0, 00:05:36.635 "w_mbytes_per_sec": 0 00:05:36.635 }, 00:05:36.635 "claimed": true, 00:05:36.635 "claim_type": 
"exclusive_write", 00:05:36.635 "zoned": false, 00:05:36.635 "supported_io_types": { 00:05:36.635 "read": true, 00:05:36.635 "write": true, 00:05:36.635 "unmap": true, 00:05:36.635 "flush": true, 00:05:36.635 "reset": true, 00:05:36.635 "nvme_admin": false, 00:05:36.635 "nvme_io": false, 00:05:36.635 "nvme_io_md": false, 00:05:36.635 "write_zeroes": true, 00:05:36.635 "zcopy": true, 00:05:36.635 "get_zone_info": false, 00:05:36.635 "zone_management": false, 00:05:36.635 "zone_append": false, 00:05:36.635 "compare": false, 00:05:36.635 "compare_and_write": false, 00:05:36.635 "abort": true, 00:05:36.635 "seek_hole": false, 00:05:36.635 "seek_data": false, 00:05:36.635 "copy": true, 00:05:36.635 "nvme_iov_md": false 00:05:36.635 }, 00:05:36.635 "memory_domains": [ 00:05:36.635 { 00:05:36.635 "dma_device_id": "system", 00:05:36.635 "dma_device_type": 1 00:05:36.635 }, 00:05:36.635 { 00:05:36.635 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:36.635 "dma_device_type": 2 00:05:36.635 } 00:05:36.635 ], 00:05:36.635 "driver_specific": {} 00:05:36.635 } 00:05:36.635 ] 00:05:36.635 19:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:36.635 19:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:05:36.635 19:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:05:36.635 19:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:05:36.635 19:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:05:36.635 19:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:05:36.635 19:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:05:36.635 19:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:05:36.635 19:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:05:36.635 19:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:05:36.635 19:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:05:36.635 19:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:05:36.635 19:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:05:36.635 19:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:05:36.635 19:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:05:36.635 19:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:36.635 19:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:05:36.635 19:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:05:36.635 19:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:36.635 19:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:05:36.635 "name": "Existed_Raid", 00:05:36.635 "uuid": "22777c04-2c9a-4050-b9f2-0859bd9d1fa2", 00:05:36.635 "strip_size_kb": 64, 00:05:36.635 "state": "online", 00:05:36.635 "raid_level": "raid0", 00:05:36.635 "superblock": true, 00:05:36.635 "num_base_bdevs": 2, 00:05:36.635 "num_base_bdevs_discovered": 2, 00:05:36.635 "num_base_bdevs_operational": 2, 00:05:36.635 "base_bdevs_list": [ 00:05:36.635 { 00:05:36.635 "name": "BaseBdev1", 00:05:36.635 "uuid": "f70e4f9b-a97b-41c6-88c2-e71f9e41075c", 00:05:36.635 "is_configured": true, 00:05:36.635 "data_offset": 2048, 00:05:36.635 "data_size": 63488 
00:05:36.635 }, 00:05:36.635 { 00:05:36.635 "name": "BaseBdev2", 00:05:36.635 "uuid": "dd400631-ec04-4f1f-a41d-93b10d12464f", 00:05:36.635 "is_configured": true, 00:05:36.635 "data_offset": 2048, 00:05:36.635 "data_size": 63488 00:05:36.635 } 00:05:36.635 ] 00:05:36.635 }' 00:05:36.635 19:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:05:36.635 19:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:05:36.893 19:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:05:36.893 19:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:05:36.893 19:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:05:36.893 19:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:05:36.893 19:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:05:36.893 19:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:05:36.893 19:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:05:36.893 19:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:05:36.893 19:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:36.893 19:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:05:36.893 [2024-11-26 19:46:27.823518] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:05:37.151 19:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:37.151 19:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:05:37.151 "name": 
"Existed_Raid", 00:05:37.151 "aliases": [ 00:05:37.151 "22777c04-2c9a-4050-b9f2-0859bd9d1fa2" 00:05:37.151 ], 00:05:37.151 "product_name": "Raid Volume", 00:05:37.151 "block_size": 512, 00:05:37.151 "num_blocks": 126976, 00:05:37.151 "uuid": "22777c04-2c9a-4050-b9f2-0859bd9d1fa2", 00:05:37.151 "assigned_rate_limits": { 00:05:37.151 "rw_ios_per_sec": 0, 00:05:37.151 "rw_mbytes_per_sec": 0, 00:05:37.151 "r_mbytes_per_sec": 0, 00:05:37.151 "w_mbytes_per_sec": 0 00:05:37.151 }, 00:05:37.151 "claimed": false, 00:05:37.151 "zoned": false, 00:05:37.151 "supported_io_types": { 00:05:37.151 "read": true, 00:05:37.151 "write": true, 00:05:37.151 "unmap": true, 00:05:37.151 "flush": true, 00:05:37.151 "reset": true, 00:05:37.151 "nvme_admin": false, 00:05:37.151 "nvme_io": false, 00:05:37.151 "nvme_io_md": false, 00:05:37.151 "write_zeroes": true, 00:05:37.151 "zcopy": false, 00:05:37.151 "get_zone_info": false, 00:05:37.151 "zone_management": false, 00:05:37.151 "zone_append": false, 00:05:37.151 "compare": false, 00:05:37.151 "compare_and_write": false, 00:05:37.151 "abort": false, 00:05:37.151 "seek_hole": false, 00:05:37.151 "seek_data": false, 00:05:37.151 "copy": false, 00:05:37.151 "nvme_iov_md": false 00:05:37.151 }, 00:05:37.151 "memory_domains": [ 00:05:37.151 { 00:05:37.151 "dma_device_id": "system", 00:05:37.151 "dma_device_type": 1 00:05:37.151 }, 00:05:37.151 { 00:05:37.151 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:37.151 "dma_device_type": 2 00:05:37.151 }, 00:05:37.151 { 00:05:37.151 "dma_device_id": "system", 00:05:37.151 "dma_device_type": 1 00:05:37.151 }, 00:05:37.151 { 00:05:37.151 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:37.151 "dma_device_type": 2 00:05:37.151 } 00:05:37.151 ], 00:05:37.151 "driver_specific": { 00:05:37.151 "raid": { 00:05:37.151 "uuid": "22777c04-2c9a-4050-b9f2-0859bd9d1fa2", 00:05:37.151 "strip_size_kb": 64, 00:05:37.151 "state": "online", 00:05:37.151 "raid_level": "raid0", 00:05:37.151 "superblock": true, 00:05:37.151 
"num_base_bdevs": 2, 00:05:37.151 "num_base_bdevs_discovered": 2, 00:05:37.151 "num_base_bdevs_operational": 2, 00:05:37.151 "base_bdevs_list": [ 00:05:37.151 { 00:05:37.151 "name": "BaseBdev1", 00:05:37.151 "uuid": "f70e4f9b-a97b-41c6-88c2-e71f9e41075c", 00:05:37.151 "is_configured": true, 00:05:37.151 "data_offset": 2048, 00:05:37.151 "data_size": 63488 00:05:37.151 }, 00:05:37.151 { 00:05:37.151 "name": "BaseBdev2", 00:05:37.151 "uuid": "dd400631-ec04-4f1f-a41d-93b10d12464f", 00:05:37.151 "is_configured": true, 00:05:37.151 "data_offset": 2048, 00:05:37.151 "data_size": 63488 00:05:37.151 } 00:05:37.151 ] 00:05:37.151 } 00:05:37.151 } 00:05:37.151 }' 00:05:37.151 19:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:05:37.151 19:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:05:37.151 BaseBdev2' 00:05:37.151 19:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:05:37.151 19:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:05:37.151 19:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:05:37.151 19:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:05:37.151 19:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:37.151 19:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:05:37.151 19:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:05:37.151 19:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:05:37.151 19:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:05:37.151 19:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:05:37.151 19:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:05:37.151 19:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:05:37.151 19:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:37.151 19:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:05:37.151 19:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:05:37.151 19:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:37.151 19:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:05:37.151 19:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:05:37.151 19:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:05:37.151 19:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:37.151 19:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:05:37.151 [2024-11-26 19:46:27.971260] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:05:37.151 [2024-11-26 19:46:27.971297] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:05:37.151 [2024-11-26 19:46:27.971365] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:05:37.151 19:46:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:05:37.151 19:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:05:37.151 19:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:05:37.151 19:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:05:37.151 19:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:05:37.151 19:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:05:37.151 19:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:05:37.151 19:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:05:37.151 19:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:05:37.151 19:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:05:37.151 19:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:05:37.151 19:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:05:37.151 19:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:05:37.151 19:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:05:37.151 19:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:05:37.151 19:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:05:37.151 19:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:05:37.151 19:46:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:37.151 19:46:28 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:05:37.152 19:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:05:37.152 19:46:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:37.152 19:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:05:37.152 "name": "Existed_Raid", 00:05:37.152 "uuid": "22777c04-2c9a-4050-b9f2-0859bd9d1fa2", 00:05:37.152 "strip_size_kb": 64, 00:05:37.152 "state": "offline", 00:05:37.152 "raid_level": "raid0", 00:05:37.152 "superblock": true, 00:05:37.152 "num_base_bdevs": 2, 00:05:37.152 "num_base_bdevs_discovered": 1, 00:05:37.152 "num_base_bdevs_operational": 1, 00:05:37.152 "base_bdevs_list": [ 00:05:37.152 { 00:05:37.152 "name": null, 00:05:37.152 "uuid": "00000000-0000-0000-0000-000000000000", 00:05:37.152 "is_configured": false, 00:05:37.152 "data_offset": 0, 00:05:37.152 "data_size": 63488 00:05:37.152 }, 00:05:37.152 { 00:05:37.152 "name": "BaseBdev2", 00:05:37.152 "uuid": "dd400631-ec04-4f1f-a41d-93b10d12464f", 00:05:37.152 "is_configured": true, 00:05:37.152 "data_offset": 2048, 00:05:37.152 "data_size": 63488 00:05:37.152 } 00:05:37.152 ] 00:05:37.152 }' 00:05:37.152 19:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:05:37.152 19:46:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:05:37.716 19:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:05:37.716 19:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:05:37.716 19:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:05:37.716 19:46:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:37.716 19:46:28 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:05:37.716 19:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:05:37.716 19:46:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:37.716 19:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:05:37.716 19:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:05:37.716 19:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:05:37.716 19:46:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:37.716 19:46:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:05:37.716 [2024-11-26 19:46:28.397431] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:05:37.716 [2024-11-26 19:46:28.397492] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:05:37.716 19:46:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:37.716 19:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:05:37.716 19:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:05:37.716 19:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:05:37.716 19:46:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:37.716 19:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:05:37.716 19:46:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:05:37.716 19:46:28 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:37.716 19:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:05:37.716 19:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:05:37.716 19:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:05:37.716 19:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 59693 00:05:37.716 19:46:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 59693 ']' 00:05:37.716 19:46:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 59693 00:05:37.716 19:46:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:05:37.716 19:46:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:37.716 19:46:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59693 00:05:37.716 19:46:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:37.716 19:46:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:37.716 killing process with pid 59693 00:05:37.716 19:46:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59693' 00:05:37.716 19:46:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 59693 00:05:37.716 [2024-11-26 19:46:28.525987] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:05:37.716 19:46:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 59693 00:05:37.716 [2024-11-26 19:46:28.537198] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:05:38.650 19:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 
0 00:05:38.650 00:05:38.650 real 0m3.888s 00:05:38.650 user 0m5.552s 00:05:38.650 sys 0m0.625s 00:05:38.650 19:46:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:38.650 19:46:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:05:38.650 ************************************ 00:05:38.650 END TEST raid_state_function_test_sb 00:05:38.650 ************************************ 00:05:38.650 19:46:29 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:05:38.650 19:46:29 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:38.650 19:46:29 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:38.650 19:46:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:05:38.650 ************************************ 00:05:38.650 START TEST raid_superblock_test 00:05:38.650 ************************************ 00:05:38.650 19:46:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 2 00:05:38.650 19:46:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:05:38.650 19:46:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:05:38.650 19:46:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:05:38.650 19:46:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:05:38.650 19:46:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:05:38.650 19:46:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:05:38.650 19:46:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:05:38.650 19:46:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:05:38.650 19:46:29 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:05:38.650 19:46:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:05:38.650 19:46:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:05:38.650 19:46:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:05:38.650 19:46:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:05:38.650 19:46:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:05:38.650 19:46:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:05:38.650 19:46:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:05:38.650 19:46:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=59931 00:05:38.650 19:46:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 59931 00:05:38.650 19:46:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 59931 ']' 00:05:38.650 19:46:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:38.650 19:46:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:38.650 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:38.650 19:46:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:38.650 19:46:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:38.650 19:46:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:05:38.650 19:46:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:05:38.650 [2024-11-26 19:46:29.409843] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 00:05:38.650 [2024-11-26 19:46:29.409971] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59931 ] 00:05:38.650 [2024-11-26 19:46:29.567121] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:38.908 [2024-11-26 19:46:29.689192] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.908 [2024-11-26 19:46:29.838365] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:05:38.908 [2024-11-26 19:46:29.838434] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:05:39.477 19:46:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:39.477 19:46:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:05:39.477 19:46:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:05:39.477 19:46:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:05:39.477 19:46:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:05:39.477 19:46:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:05:39.477 19:46:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:05:39.477 
19:46:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:05:39.477 19:46:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:05:39.477 19:46:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:05:39.477 19:46:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:05:39.477 19:46:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:39.477 19:46:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:05:39.477 malloc1 00:05:39.477 19:46:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:39.477 19:46:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:05:39.477 19:46:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:39.477 19:46:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:05:39.477 [2024-11-26 19:46:30.278032] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:05:39.477 [2024-11-26 19:46:30.278231] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:39.477 [2024-11-26 19:46:30.278261] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:05:39.477 [2024-11-26 19:46:30.278272] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:39.477 [2024-11-26 19:46:30.280615] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:39.477 [2024-11-26 19:46:30.280650] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:05:39.477 pt1 00:05:39.477 19:46:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:05:39.477 19:46:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:05:39.477 19:46:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:05:39.477 19:46:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:05:39.477 19:46:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:05:39.477 19:46:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:05:39.477 19:46:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:05:39.477 19:46:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:05:39.477 19:46:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:05:39.477 19:46:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:05:39.477 19:46:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:39.477 19:46:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:05:39.477 malloc2 00:05:39.477 19:46:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:39.477 19:46:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:05:39.477 19:46:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:39.477 19:46:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:05:39.477 [2024-11-26 19:46:30.316417] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:05:39.477 [2024-11-26 19:46:30.316609] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:39.477 [2024-11-26 
19:46:30.316640] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:05:39.477 [2024-11-26 19:46:30.316649] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:39.477 [2024-11-26 19:46:30.318856] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:39.477 [2024-11-26 19:46:30.318889] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:05:39.477 pt2 00:05:39.477 19:46:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:39.477 19:46:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:05:39.477 19:46:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:05:39.477 19:46:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:05:39.477 19:46:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:39.477 19:46:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:05:39.478 [2024-11-26 19:46:30.324473] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:05:39.478 [2024-11-26 19:46:30.326425] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:05:39.478 [2024-11-26 19:46:30.326582] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:05:39.478 [2024-11-26 19:46:30.326593] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:05:39.478 [2024-11-26 19:46:30.326849] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:05:39.478 [2024-11-26 19:46:30.326995] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:05:39.478 [2024-11-26 19:46:30.327006] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev 
is created with name raid_bdev1, raid_bdev 0x617000007780 00:05:39.478 [2024-11-26 19:46:30.327141] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:05:39.478 19:46:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:39.478 19:46:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:05:39.478 19:46:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:05:39.478 19:46:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:05:39.478 19:46:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:05:39.478 19:46:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:05:39.478 19:46:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:05:39.478 19:46:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:05:39.478 19:46:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:05:39.478 19:46:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:05:39.478 19:46:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:05:39.478 19:46:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:05:39.478 19:46:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:05:39.478 19:46:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:39.478 19:46:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:05:39.478 19:46:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:39.478 19:46:30 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:05:39.478 "name": "raid_bdev1", 00:05:39.478 "uuid": "f891c7c8-02a2-4456-9e0b-cadb93c21328", 00:05:39.478 "strip_size_kb": 64, 00:05:39.478 "state": "online", 00:05:39.478 "raid_level": "raid0", 00:05:39.478 "superblock": true, 00:05:39.478 "num_base_bdevs": 2, 00:05:39.478 "num_base_bdevs_discovered": 2, 00:05:39.478 "num_base_bdevs_operational": 2, 00:05:39.478 "base_bdevs_list": [ 00:05:39.478 { 00:05:39.478 "name": "pt1", 00:05:39.478 "uuid": "00000000-0000-0000-0000-000000000001", 00:05:39.478 "is_configured": true, 00:05:39.478 "data_offset": 2048, 00:05:39.478 "data_size": 63488 00:05:39.478 }, 00:05:39.478 { 00:05:39.478 "name": "pt2", 00:05:39.478 "uuid": "00000000-0000-0000-0000-000000000002", 00:05:39.478 "is_configured": true, 00:05:39.478 "data_offset": 2048, 00:05:39.478 "data_size": 63488 00:05:39.478 } 00:05:39.478 ] 00:05:39.478 }' 00:05:39.478 19:46:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:05:39.478 19:46:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:05:39.736 19:46:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:05:39.736 19:46:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:05:39.736 19:46:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:05:39.736 19:46:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:05:39.736 19:46:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:05:39.736 19:46:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:05:39.736 19:46:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:05:39.736 19:46:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:39.736 
19:46:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:05:39.736 19:46:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:05:39.736 [2024-11-26 19:46:30.652847] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:05:39.736 19:46:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:39.995 19:46:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:05:39.995 "name": "raid_bdev1", 00:05:39.995 "aliases": [ 00:05:39.995 "f891c7c8-02a2-4456-9e0b-cadb93c21328" 00:05:39.995 ], 00:05:39.995 "product_name": "Raid Volume", 00:05:39.995 "block_size": 512, 00:05:39.995 "num_blocks": 126976, 00:05:39.995 "uuid": "f891c7c8-02a2-4456-9e0b-cadb93c21328", 00:05:39.995 "assigned_rate_limits": { 00:05:39.995 "rw_ios_per_sec": 0, 00:05:39.995 "rw_mbytes_per_sec": 0, 00:05:39.995 "r_mbytes_per_sec": 0, 00:05:39.995 "w_mbytes_per_sec": 0 00:05:39.995 }, 00:05:39.995 "claimed": false, 00:05:39.995 "zoned": false, 00:05:39.995 "supported_io_types": { 00:05:39.995 "read": true, 00:05:39.995 "write": true, 00:05:39.995 "unmap": true, 00:05:39.995 "flush": true, 00:05:39.995 "reset": true, 00:05:39.995 "nvme_admin": false, 00:05:39.995 "nvme_io": false, 00:05:39.995 "nvme_io_md": false, 00:05:39.995 "write_zeroes": true, 00:05:39.995 "zcopy": false, 00:05:39.995 "get_zone_info": false, 00:05:39.995 "zone_management": false, 00:05:39.995 "zone_append": false, 00:05:39.995 "compare": false, 00:05:39.995 "compare_and_write": false, 00:05:39.995 "abort": false, 00:05:39.995 "seek_hole": false, 00:05:39.995 "seek_data": false, 00:05:39.995 "copy": false, 00:05:39.995 "nvme_iov_md": false 00:05:39.995 }, 00:05:39.995 "memory_domains": [ 00:05:39.995 { 00:05:39.995 "dma_device_id": "system", 00:05:39.995 "dma_device_type": 1 00:05:39.995 }, 00:05:39.995 { 00:05:39.995 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:39.995 "dma_device_type": 2 
00:05:39.995 }, 00:05:39.995 { 00:05:39.995 "dma_device_id": "system", 00:05:39.995 "dma_device_type": 1 00:05:39.995 }, 00:05:39.995 { 00:05:39.995 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:39.995 "dma_device_type": 2 00:05:39.995 } 00:05:39.995 ], 00:05:39.995 "driver_specific": { 00:05:39.995 "raid": { 00:05:39.995 "uuid": "f891c7c8-02a2-4456-9e0b-cadb93c21328", 00:05:39.995 "strip_size_kb": 64, 00:05:39.995 "state": "online", 00:05:39.995 "raid_level": "raid0", 00:05:39.995 "superblock": true, 00:05:39.995 "num_base_bdevs": 2, 00:05:39.995 "num_base_bdevs_discovered": 2, 00:05:39.995 "num_base_bdevs_operational": 2, 00:05:39.995 "base_bdevs_list": [ 00:05:39.995 { 00:05:39.995 "name": "pt1", 00:05:39.995 "uuid": "00000000-0000-0000-0000-000000000001", 00:05:39.995 "is_configured": true, 00:05:39.995 "data_offset": 2048, 00:05:39.995 "data_size": 63488 00:05:39.995 }, 00:05:39.995 { 00:05:39.995 "name": "pt2", 00:05:39.995 "uuid": "00000000-0000-0000-0000-000000000002", 00:05:39.995 "is_configured": true, 00:05:39.995 "data_offset": 2048, 00:05:39.995 "data_size": 63488 00:05:39.995 } 00:05:39.995 ] 00:05:39.995 } 00:05:39.995 } 00:05:39.995 }' 00:05:39.995 19:46:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:05:39.995 19:46:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:05:39.995 pt2' 00:05:39.995 19:46:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:05:39.995 19:46:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:05:39.995 19:46:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:05:39.995 19:46:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:05:39.995 19:46:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:05:39.995 19:46:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:39.995 19:46:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:05:39.995 19:46:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:39.995 19:46:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:05:39.995 19:46:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:05:39.995 19:46:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:05:39.995 19:46:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:05:39.995 19:46:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:05:39.995 19:46:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:39.995 19:46:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:05:39.995 19:46:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:39.995 19:46:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:05:39.995 19:46:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:05:39.995 19:46:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:05:39.995 19:46:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:05:39.995 19:46:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:39.995 19:46:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:05:39.995 [2024-11-26 
19:46:30.800861] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:05:39.995 19:46:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:39.995 19:46:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=f891c7c8-02a2-4456-9e0b-cadb93c21328 00:05:39.995 19:46:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z f891c7c8-02a2-4456-9e0b-cadb93c21328 ']' 00:05:39.995 19:46:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:05:39.995 19:46:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:39.995 19:46:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:05:39.995 [2024-11-26 19:46:30.828552] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:05:39.995 [2024-11-26 19:46:30.828658] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:05:39.995 [2024-11-26 19:46:30.828791] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:05:39.995 [2024-11-26 19:46:30.828897] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:05:39.995 [2024-11-26 19:46:30.828968] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:05:39.995 19:46:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:39.995 19:46:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:05:39.995 19:46:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:39.995 19:46:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:05:39.995 19:46:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:05:39.995 
19:46:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:39.995 19:46:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:05:39.995 19:46:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:05:39.995 19:46:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:05:39.995 19:46:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:05:39.995 19:46:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:39.996 19:46:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:05:39.996 19:46:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:39.996 19:46:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:05:39.996 19:46:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:05:39.996 19:46:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:39.996 19:46:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:05:39.996 19:46:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:39.996 19:46:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:05:39.996 19:46:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:05:39.996 19:46:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:39.996 19:46:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:05:39.996 19:46:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:39.996 19:46:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' 
false == true ']' 00:05:39.996 19:46:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:05:39.996 19:46:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:05:39.996 19:46:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:05:39.996 19:46:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:39.996 19:46:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:39.996 19:46:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:39.996 19:46:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:39.996 19:46:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:05:39.996 19:46:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:39.996 19:46:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:05:39.996 [2024-11-26 19:46:30.920626] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:05:39.996 [2024-11-26 19:46:30.922679] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:05:39.996 [2024-11-26 19:46:30.922749] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:05:39.996 [2024-11-26 19:46:30.922804] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:05:39.996 [2024-11-26 19:46:30.922819] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: 
raid_bdev1 00:05:39.996 [2024-11-26 19:46:30.922833] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:05:39.996 request: 00:05:39.996 { 00:05:39.996 "name": "raid_bdev1", 00:05:39.996 "raid_level": "raid0", 00:05:39.996 "base_bdevs": [ 00:05:39.996 "malloc1", 00:05:39.996 "malloc2" 00:05:39.996 ], 00:05:39.996 "strip_size_kb": 64, 00:05:39.996 "superblock": false, 00:05:39.996 "method": "bdev_raid_create", 00:05:39.996 "req_id": 1 00:05:39.996 } 00:05:39.996 Got JSON-RPC error response 00:05:39.996 response: 00:05:39.996 { 00:05:39.996 "code": -17, 00:05:39.996 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:05:39.996 } 00:05:39.996 19:46:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:39.996 19:46:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:05:39.996 19:46:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:39.996 19:46:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:39.996 19:46:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:40.253 19:46:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:05:40.253 19:46:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:05:40.253 19:46:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:40.253 19:46:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:05:40.253 19:46:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:40.253 19:46:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:05:40.253 19:46:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:05:40.253 19:46:30 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:05:40.254 19:46:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:40.254 19:46:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:05:40.254 [2024-11-26 19:46:30.964598] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:05:40.254 [2024-11-26 19:46:30.964756] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:40.254 [2024-11-26 19:46:30.964794] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:05:40.254 [2024-11-26 19:46:30.964960] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:40.254 [2024-11-26 19:46:30.967416] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:40.254 [2024-11-26 19:46:30.967524] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:05:40.254 [2024-11-26 19:46:30.967716] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:05:40.254 [2024-11-26 19:46:30.967833] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:05:40.254 pt1 00:05:40.254 19:46:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:40.254 19:46:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:05:40.254 19:46:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:05:40.254 19:46:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:05:40.254 19:46:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:05:40.254 19:46:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:05:40.254 
19:46:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:05:40.254 19:46:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:05:40.254 19:46:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:05:40.254 19:46:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:05:40.254 19:46:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:05:40.254 19:46:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:05:40.254 19:46:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:05:40.254 19:46:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:40.254 19:46:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:05:40.254 19:46:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:40.254 19:46:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:05:40.254 "name": "raid_bdev1", 00:05:40.254 "uuid": "f891c7c8-02a2-4456-9e0b-cadb93c21328", 00:05:40.254 "strip_size_kb": 64, 00:05:40.254 "state": "configuring", 00:05:40.254 "raid_level": "raid0", 00:05:40.254 "superblock": true, 00:05:40.254 "num_base_bdevs": 2, 00:05:40.254 "num_base_bdevs_discovered": 1, 00:05:40.254 "num_base_bdevs_operational": 2, 00:05:40.254 "base_bdevs_list": [ 00:05:40.254 { 00:05:40.254 "name": "pt1", 00:05:40.254 "uuid": "00000000-0000-0000-0000-000000000001", 00:05:40.254 "is_configured": true, 00:05:40.254 "data_offset": 2048, 00:05:40.254 "data_size": 63488 00:05:40.254 }, 00:05:40.254 { 00:05:40.254 "name": null, 00:05:40.254 "uuid": "00000000-0000-0000-0000-000000000002", 00:05:40.254 "is_configured": false, 00:05:40.254 "data_offset": 2048, 00:05:40.254 "data_size": 63488 
00:05:40.254 } 00:05:40.254 ] 00:05:40.254 }' 00:05:40.254 19:46:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:05:40.254 19:46:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:05:40.512 19:46:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:05:40.512 19:46:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:05:40.512 19:46:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:05:40.512 19:46:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:05:40.512 19:46:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:40.512 19:46:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:05:40.512 [2024-11-26 19:46:31.284721] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:05:40.512 [2024-11-26 19:46:31.284803] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:40.512 [2024-11-26 19:46:31.284824] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:05:40.512 [2024-11-26 19:46:31.284836] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:40.512 [2024-11-26 19:46:31.285310] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:40.512 [2024-11-26 19:46:31.285328] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:05:40.512 [2024-11-26 19:46:31.285431] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:05:40.512 [2024-11-26 19:46:31.285460] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:05:40.512 [2024-11-26 19:46:31.285572] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 
0x617000007e80 00:05:40.512 [2024-11-26 19:46:31.285584] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:05:40.512 [2024-11-26 19:46:31.285832] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:05:40.512 [2024-11-26 19:46:31.285962] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:05:40.512 [2024-11-26 19:46:31.285970] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:05:40.512 [2024-11-26 19:46:31.286099] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:05:40.512 pt2 00:05:40.512 19:46:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:40.512 19:46:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:05:40.512 19:46:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:05:40.512 19:46:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:05:40.512 19:46:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:05:40.512 19:46:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:05:40.512 19:46:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:05:40.512 19:46:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:05:40.512 19:46:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:05:40.512 19:46:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:05:40.512 19:46:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:05:40.512 19:46:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:05:40.512 19:46:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:05:40.512 19:46:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:05:40.512 19:46:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:05:40.512 19:46:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:40.512 19:46:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:05:40.512 19:46:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:40.512 19:46:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:05:40.512 "name": "raid_bdev1", 00:05:40.512 "uuid": "f891c7c8-02a2-4456-9e0b-cadb93c21328", 00:05:40.512 "strip_size_kb": 64, 00:05:40.512 "state": "online", 00:05:40.512 "raid_level": "raid0", 00:05:40.512 "superblock": true, 00:05:40.512 "num_base_bdevs": 2, 00:05:40.512 "num_base_bdevs_discovered": 2, 00:05:40.512 "num_base_bdevs_operational": 2, 00:05:40.512 "base_bdevs_list": [ 00:05:40.512 { 00:05:40.512 "name": "pt1", 00:05:40.512 "uuid": "00000000-0000-0000-0000-000000000001", 00:05:40.512 "is_configured": true, 00:05:40.512 "data_offset": 2048, 00:05:40.512 "data_size": 63488 00:05:40.512 }, 00:05:40.512 { 00:05:40.512 "name": "pt2", 00:05:40.512 "uuid": "00000000-0000-0000-0000-000000000002", 00:05:40.512 "is_configured": true, 00:05:40.512 "data_offset": 2048, 00:05:40.512 "data_size": 63488 00:05:40.512 } 00:05:40.512 ] 00:05:40.512 }' 00:05:40.512 19:46:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:05:40.512 19:46:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:05:40.770 19:46:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:05:40.770 19:46:31 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:05:40.770 19:46:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:05:40.770 19:46:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:05:40.770 19:46:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:05:40.770 19:46:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:05:40.770 19:46:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:05:40.770 19:46:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:05:40.770 19:46:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:40.770 19:46:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:05:40.770 [2024-11-26 19:46:31.613054] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:05:40.770 19:46:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:40.770 19:46:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:05:40.770 "name": "raid_bdev1", 00:05:40.770 "aliases": [ 00:05:40.770 "f891c7c8-02a2-4456-9e0b-cadb93c21328" 00:05:40.770 ], 00:05:40.770 "product_name": "Raid Volume", 00:05:40.770 "block_size": 512, 00:05:40.770 "num_blocks": 126976, 00:05:40.770 "uuid": "f891c7c8-02a2-4456-9e0b-cadb93c21328", 00:05:40.770 "assigned_rate_limits": { 00:05:40.770 "rw_ios_per_sec": 0, 00:05:40.770 "rw_mbytes_per_sec": 0, 00:05:40.771 "r_mbytes_per_sec": 0, 00:05:40.771 "w_mbytes_per_sec": 0 00:05:40.771 }, 00:05:40.771 "claimed": false, 00:05:40.771 "zoned": false, 00:05:40.771 "supported_io_types": { 00:05:40.771 "read": true, 00:05:40.771 "write": true, 00:05:40.771 "unmap": true, 00:05:40.771 "flush": true, 00:05:40.771 "reset": true, 00:05:40.771 "nvme_admin": false, 
00:05:40.771 "nvme_io": false, 00:05:40.771 "nvme_io_md": false, 00:05:40.771 "write_zeroes": true, 00:05:40.771 "zcopy": false, 00:05:40.771 "get_zone_info": false, 00:05:40.771 "zone_management": false, 00:05:40.771 "zone_append": false, 00:05:40.771 "compare": false, 00:05:40.771 "compare_and_write": false, 00:05:40.771 "abort": false, 00:05:40.771 "seek_hole": false, 00:05:40.771 "seek_data": false, 00:05:40.771 "copy": false, 00:05:40.771 "nvme_iov_md": false 00:05:40.771 }, 00:05:40.771 "memory_domains": [ 00:05:40.771 { 00:05:40.771 "dma_device_id": "system", 00:05:40.771 "dma_device_type": 1 00:05:40.771 }, 00:05:40.771 { 00:05:40.771 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:40.771 "dma_device_type": 2 00:05:40.771 }, 00:05:40.771 { 00:05:40.771 "dma_device_id": "system", 00:05:40.771 "dma_device_type": 1 00:05:40.771 }, 00:05:40.771 { 00:05:40.771 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:40.771 "dma_device_type": 2 00:05:40.771 } 00:05:40.771 ], 00:05:40.771 "driver_specific": { 00:05:40.771 "raid": { 00:05:40.771 "uuid": "f891c7c8-02a2-4456-9e0b-cadb93c21328", 00:05:40.771 "strip_size_kb": 64, 00:05:40.771 "state": "online", 00:05:40.771 "raid_level": "raid0", 00:05:40.771 "superblock": true, 00:05:40.771 "num_base_bdevs": 2, 00:05:40.771 "num_base_bdevs_discovered": 2, 00:05:40.771 "num_base_bdevs_operational": 2, 00:05:40.771 "base_bdevs_list": [ 00:05:40.771 { 00:05:40.771 "name": "pt1", 00:05:40.771 "uuid": "00000000-0000-0000-0000-000000000001", 00:05:40.771 "is_configured": true, 00:05:40.771 "data_offset": 2048, 00:05:40.771 "data_size": 63488 00:05:40.771 }, 00:05:40.771 { 00:05:40.771 "name": "pt2", 00:05:40.771 "uuid": "00000000-0000-0000-0000-000000000002", 00:05:40.771 "is_configured": true, 00:05:40.771 "data_offset": 2048, 00:05:40.771 "data_size": 63488 00:05:40.771 } 00:05:40.771 ] 00:05:40.771 } 00:05:40.771 } 00:05:40.771 }' 00:05:40.771 19:46:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r 
'.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:05:40.771 19:46:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:05:40.771 pt2' 00:05:40.771 19:46:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:05:40.771 19:46:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:05:40.771 19:46:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:05:40.771 19:46:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:05:40.771 19:46:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:05:40.771 19:46:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:40.771 19:46:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:05:41.029 19:46:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:41.029 19:46:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:05:41.029 19:46:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:05:41.029 19:46:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:05:41.029 19:46:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:05:41.029 19:46:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:05:41.029 19:46:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:41.029 19:46:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:05:41.029 19:46:31 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:41.029 19:46:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:05:41.029 19:46:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:05:41.029 19:46:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:05:41.029 19:46:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:41.029 19:46:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:05:41.029 19:46:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:05:41.029 [2024-11-26 19:46:31.777091] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:05:41.029 19:46:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:41.029 19:46:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' f891c7c8-02a2-4456-9e0b-cadb93c21328 '!=' f891c7c8-02a2-4456-9e0b-cadb93c21328 ']' 00:05:41.029 19:46:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:05:41.029 19:46:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:05:41.029 19:46:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:05:41.029 19:46:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 59931 00:05:41.029 19:46:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 59931 ']' 00:05:41.029 19:46:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 59931 00:05:41.029 19:46:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:05:41.029 19:46:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:41.029 19:46:31 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59931 00:05:41.029 killing process with pid 59931 00:05:41.029 19:46:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:41.029 19:46:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:41.029 19:46:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59931' 00:05:41.029 19:46:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 59931 00:05:41.029 19:46:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 59931 00:05:41.029 [2024-11-26 19:46:31.827928] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:05:41.029 [2024-11-26 19:46:31.828029] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:05:41.029 [2024-11-26 19:46:31.828085] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:05:41.029 [2024-11-26 19:46:31.828097] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:05:41.287 [2024-11-26 19:46:31.964958] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:05:41.852 19:46:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:05:41.852 00:05:41.852 real 0m3.370s 00:05:41.852 user 0m4.659s 00:05:41.852 sys 0m0.561s 00:05:41.852 19:46:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:41.852 19:46:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:05:41.852 ************************************ 00:05:41.853 END TEST raid_superblock_test 00:05:41.853 ************************************ 00:05:41.853 19:46:32 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 2 read 00:05:41.853 19:46:32 bdev_raid -- 
common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:05:41.853 19:46:32 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:41.853 19:46:32 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:05:41.853 ************************************ 00:05:41.853 START TEST raid_read_error_test 00:05:41.853 ************************************ 00:05:41.853 19:46:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 2 read 00:05:41.853 19:46:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:05:41.853 19:46:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:05:41.853 19:46:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:05:41.853 19:46:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:05:41.853 19:46:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:05:41.853 19:46:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:05:41.853 19:46:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:05:41.853 19:46:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:05:41.853 19:46:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:05:41.853 19:46:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:05:41.853 19:46:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:05:41.853 19:46:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:05:41.853 19:46:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:05:41.853 19:46:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:05:41.853 19:46:32 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@795 -- # local strip_size 00:05:41.853 19:46:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:05:41.853 19:46:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:05:41.853 19:46:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:05:41.853 19:46:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:05:41.853 19:46:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:05:41.853 19:46:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:05:41.853 19:46:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:05:41.853 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:41.853 19:46:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.rStIYFiRWI 00:05:41.853 19:46:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=60137 00:05:41.853 19:46:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 60137 00:05:41.853 19:46:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 60137 ']' 00:05:41.853 19:46:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:41.853 19:46:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:41.853 19:46:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:41.853 19:46:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:05:41.853 19:46:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:41.853 19:46:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:05:42.152 [2024-11-26 19:46:32.832971] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 00:05:42.152 [2024-11-26 19:46:32.833284] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60137 ] 00:05:42.152 [2024-11-26 19:46:32.994055] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:42.409 [2024-11-26 19:46:33.109953] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.409 [2024-11-26 19:46:33.257050] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:05:42.409 [2024-11-26 19:46:33.257116] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:05:42.974 19:46:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:42.974 19:46:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:05:42.974 19:46:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:05:42.974 19:46:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:05:42.974 19:46:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:42.974 19:46:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:05:42.974 BaseBdev1_malloc 00:05:42.974 19:46:33 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:42.974 19:46:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:05:42.975 19:46:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:42.975 19:46:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:05:42.975 true 00:05:42.975 19:46:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:42.975 19:46:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:05:42.975 19:46:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:42.975 19:46:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:05:42.975 [2024-11-26 19:46:33.724501] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:05:42.975 [2024-11-26 19:46:33.724566] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:42.975 [2024-11-26 19:46:33.724585] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:05:42.975 [2024-11-26 19:46:33.724597] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:42.975 [2024-11-26 19:46:33.726832] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:42.975 [2024-11-26 19:46:33.726870] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:05:42.975 BaseBdev1 00:05:42.975 19:46:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:42.975 19:46:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:05:42.975 19:46:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev2_malloc 00:05:42.975 19:46:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:42.975 19:46:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:05:42.975 BaseBdev2_malloc 00:05:42.975 19:46:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:42.975 19:46:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:05:42.975 19:46:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:42.975 19:46:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:05:42.975 true 00:05:42.975 19:46:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:42.975 19:46:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:05:42.975 19:46:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:42.975 19:46:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:05:42.975 [2024-11-26 19:46:33.774650] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:05:42.975 [2024-11-26 19:46:33.774699] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:42.975 [2024-11-26 19:46:33.774716] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:05:42.975 [2024-11-26 19:46:33.774729] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:42.975 [2024-11-26 19:46:33.776984] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:42.975 [2024-11-26 19:46:33.777019] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:05:42.975 BaseBdev2 00:05:42.975 19:46:33 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:42.975 19:46:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:05:42.975 19:46:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:42.975 19:46:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:05:42.975 [2024-11-26 19:46:33.782714] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:05:42.975 [2024-11-26 19:46:33.784705] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:05:42.975 [2024-11-26 19:46:33.784895] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:05:42.975 [2024-11-26 19:46:33.784911] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:05:42.975 [2024-11-26 19:46:33.785156] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:05:42.975 [2024-11-26 19:46:33.785308] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:05:42.975 [2024-11-26 19:46:33.785319] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:05:42.975 [2024-11-26 19:46:33.785484] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:05:42.975 19:46:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:42.975 19:46:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:05:42.975 19:46:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:05:42.975 19:46:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:05:42.975 19:46:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # 
local raid_level=raid0 00:05:42.975 19:46:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:05:42.975 19:46:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:05:42.975 19:46:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:05:42.975 19:46:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:05:42.975 19:46:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:05:42.975 19:46:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:05:42.975 19:46:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:05:42.975 19:46:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:05:42.975 19:46:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:42.975 19:46:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:05:42.975 19:46:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:42.975 19:46:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:05:42.975 "name": "raid_bdev1", 00:05:42.975 "uuid": "0368fa7c-d746-404a-bf34-09f3bef4dd3c", 00:05:42.975 "strip_size_kb": 64, 00:05:42.975 "state": "online", 00:05:42.975 "raid_level": "raid0", 00:05:42.975 "superblock": true, 00:05:42.975 "num_base_bdevs": 2, 00:05:42.975 "num_base_bdevs_discovered": 2, 00:05:42.975 "num_base_bdevs_operational": 2, 00:05:42.975 "base_bdevs_list": [ 00:05:42.975 { 00:05:42.975 "name": "BaseBdev1", 00:05:42.975 "uuid": "44c5641a-1f87-5f00-a5a6-1257eafd9562", 00:05:42.975 "is_configured": true, 00:05:42.975 "data_offset": 2048, 00:05:42.975 "data_size": 63488 00:05:42.975 }, 00:05:42.975 { 00:05:42.975 "name": "BaseBdev2", 00:05:42.975 "uuid": 
"3a4ee423-567f-57ca-9439-37b1f35bfe7d", 00:05:42.975 "is_configured": true, 00:05:42.975 "data_offset": 2048, 00:05:42.975 "data_size": 63488 00:05:42.975 } 00:05:42.975 ] 00:05:42.975 }' 00:05:42.975 19:46:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:05:42.975 19:46:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:05:43.232 19:46:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:05:43.232 19:46:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:05:43.490 [2024-11-26 19:46:34.235905] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:05:44.429 19:46:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:05:44.429 19:46:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:44.429 19:46:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:05:44.429 19:46:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:44.429 19:46:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:05:44.429 19:46:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:05:44.429 19:46:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:05:44.429 19:46:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:05:44.429 19:46:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:05:44.429 19:46:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:05:44.429 19:46:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:05:44.429 19:46:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:05:44.429 19:46:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:05:44.429 19:46:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:05:44.429 19:46:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:05:44.429 19:46:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:05:44.429 19:46:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:05:44.429 19:46:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:05:44.429 19:46:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:05:44.429 19:46:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:44.429 19:46:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:05:44.429 19:46:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:44.429 19:46:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:05:44.429 "name": "raid_bdev1", 00:05:44.429 "uuid": "0368fa7c-d746-404a-bf34-09f3bef4dd3c", 00:05:44.429 "strip_size_kb": 64, 00:05:44.429 "state": "online", 00:05:44.429 "raid_level": "raid0", 00:05:44.429 "superblock": true, 00:05:44.429 "num_base_bdevs": 2, 00:05:44.429 "num_base_bdevs_discovered": 2, 00:05:44.429 "num_base_bdevs_operational": 2, 00:05:44.429 "base_bdevs_list": [ 00:05:44.429 { 00:05:44.429 "name": "BaseBdev1", 00:05:44.429 "uuid": "44c5641a-1f87-5f00-a5a6-1257eafd9562", 00:05:44.429 "is_configured": true, 00:05:44.429 "data_offset": 2048, 00:05:44.429 "data_size": 63488 00:05:44.429 }, 00:05:44.429 { 00:05:44.429 "name": "BaseBdev2", 00:05:44.429 "uuid": 
"3a4ee423-567f-57ca-9439-37b1f35bfe7d", 00:05:44.429 "is_configured": true, 00:05:44.429 "data_offset": 2048, 00:05:44.429 "data_size": 63488 00:05:44.429 } 00:05:44.429 ] 00:05:44.429 }' 00:05:44.429 19:46:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:05:44.429 19:46:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:05:44.686 19:46:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:05:44.686 19:46:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:44.686 19:46:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:05:44.686 [2024-11-26 19:46:35.473400] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:05:44.686 [2024-11-26 19:46:35.473433] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:05:44.686 [2024-11-26 19:46:35.475894] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:05:44.686 [2024-11-26 19:46:35.475940] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:05:44.686 [2024-11-26 19:46:35.475971] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:05:44.686 [2024-11-26 19:46:35.475981] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:05:44.686 { 00:05:44.686 "results": [ 00:05:44.686 { 00:05:44.686 "job": "raid_bdev1", 00:05:44.686 "core_mask": "0x1", 00:05:44.686 "workload": "randrw", 00:05:44.686 "percentage": 50, 00:05:44.686 "status": "finished", 00:05:44.686 "queue_depth": 1, 00:05:44.686 "io_size": 131072, 00:05:44.686 "runtime": 1.235512, 00:05:44.686 "iops": 16530.798567719292, 00:05:44.686 "mibps": 2066.3498209649115, 00:05:44.686 "io_failed": 1, 00:05:44.686 "io_timeout": 0, 00:05:44.686 "avg_latency_us": 
83.07564681291781, 00:05:44.686 "min_latency_us": 26.19076923076923, 00:05:44.686 "max_latency_us": 1405.2430769230768 00:05:44.686 } 00:05:44.686 ], 00:05:44.686 "core_count": 1 00:05:44.686 } 00:05:44.686 19:46:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:44.686 19:46:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 60137 00:05:44.686 19:46:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 60137 ']' 00:05:44.686 19:46:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 60137 00:05:44.686 19:46:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:05:44.686 19:46:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:44.686 19:46:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60137 00:05:44.686 killing process with pid 60137 00:05:44.686 19:46:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:44.686 19:46:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:44.686 19:46:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60137' 00:05:44.686 19:46:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 60137 00:05:44.686 [2024-11-26 19:46:35.505965] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:05:44.686 19:46:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 60137 00:05:44.686 [2024-11-26 19:46:35.575819] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:05:45.620 19:46:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.rStIYFiRWI 00:05:45.620 19:46:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:05:45.620 
19:46:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:05:45.620 19:46:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.81 00:05:45.620 19:46:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:05:45.620 19:46:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:05:45.620 19:46:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:05:45.620 ************************************ 00:05:45.620 END TEST raid_read_error_test 00:05:45.620 ************************************ 00:05:45.620 19:46:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.81 != \0\.\0\0 ]] 00:05:45.620 00:05:45.620 real 0m3.477s 00:05:45.620 user 0m4.202s 00:05:45.620 sys 0m0.400s 00:05:45.620 19:46:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:45.620 19:46:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:05:45.620 19:46:36 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 2 write 00:05:45.620 19:46:36 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:05:45.620 19:46:36 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:45.620 19:46:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:05:45.620 ************************************ 00:05:45.620 START TEST raid_write_error_test 00:05:45.620 ************************************ 00:05:45.620 19:46:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 2 write 00:05:45.620 19:46:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:05:45.620 19:46:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:05:45.620 19:46:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 
00:05:45.620 19:46:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:05:45.620 19:46:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:05:45.620 19:46:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:05:45.620 19:46:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:05:45.620 19:46:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:05:45.620 19:46:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:05:45.620 19:46:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:05:45.620 19:46:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:05:45.620 19:46:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:05:45.620 19:46:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:05:45.620 19:46:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:05:45.620 19:46:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:05:45.620 19:46:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:05:45.620 19:46:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:05:45.620 19:46:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:05:45.620 19:46:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:05:45.620 19:46:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:05:45.620 19:46:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:05:45.620 19:46:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:05:45.620 Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:45.620 19:46:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.F0DNP1VN7N 00:05:45.620 19:46:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=60267 00:05:45.620 19:46:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 60267 00:05:45.620 19:46:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:05:45.620 19:46:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 60267 ']' 00:05:45.620 19:46:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:45.620 19:46:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:45.620 19:46:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:45.620 19:46:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:45.620 19:46:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:05:45.620 [2024-11-26 19:46:36.363909] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 
00:05:45.620 [2024-11-26 19:46:36.364045] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60267 ] 00:05:45.621 [2024-11-26 19:46:36.518213] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.879 [2024-11-26 19:46:36.628147] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.879 [2024-11-26 19:46:36.757761] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:05:45.879 [2024-11-26 19:46:36.757823] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:05:46.444 19:46:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:46.444 19:46:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:05:46.444 19:46:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:05:46.444 19:46:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:05:46.444 19:46:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:46.444 19:46:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:05:46.444 BaseBdev1_malloc 00:05:46.444 19:46:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:46.444 19:46:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:05:46.444 19:46:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:46.444 19:46:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:05:46.444 true 00:05:46.444 19:46:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:05:46.444 19:46:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:05:46.444 19:46:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:46.444 19:46:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:05:46.444 [2024-11-26 19:46:37.272660] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:05:46.444 [2024-11-26 19:46:37.272717] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:46.444 [2024-11-26 19:46:37.272736] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:05:46.444 [2024-11-26 19:46:37.272746] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:46.444 [2024-11-26 19:46:37.274747] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:46.444 [2024-11-26 19:46:37.274781] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:05:46.444 BaseBdev1 00:05:46.444 19:46:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:46.444 19:46:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:05:46.444 19:46:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:05:46.444 19:46:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:46.444 19:46:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:05:46.444 BaseBdev2_malloc 00:05:46.444 19:46:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:46.444 19:46:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:05:46.444 19:46:37 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:46.444 19:46:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:05:46.444 true 00:05:46.444 19:46:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:46.444 19:46:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:05:46.444 19:46:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:46.444 19:46:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:05:46.444 [2024-11-26 19:46:37.314924] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:05:46.444 [2024-11-26 19:46:37.314989] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:46.444 [2024-11-26 19:46:37.315005] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:05:46.444 [2024-11-26 19:46:37.315015] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:46.444 [2024-11-26 19:46:37.316937] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:46.444 [2024-11-26 19:46:37.316972] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:05:46.444 BaseBdev2 00:05:46.444 19:46:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:46.444 19:46:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:05:46.444 19:46:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:46.444 19:46:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:05:46.444 [2024-11-26 19:46:37.322999] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:05:46.445 [2024-11-26 19:46:37.324671] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:05:46.445 [2024-11-26 19:46:37.324843] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:05:46.445 [2024-11-26 19:46:37.324858] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:05:46.445 [2024-11-26 19:46:37.325086] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:05:46.445 [2024-11-26 19:46:37.325220] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:05:46.445 [2024-11-26 19:46:37.325229] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:05:46.445 [2024-11-26 19:46:37.325375] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:05:46.445 19:46:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:46.445 19:46:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:05:46.445 19:46:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:05:46.445 19:46:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:05:46.445 19:46:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:05:46.445 19:46:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:05:46.445 19:46:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:05:46.445 19:46:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:05:46.445 19:46:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:05:46.445 19:46:37 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:05:46.445 19:46:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:05:46.445 19:46:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:05:46.445 19:46:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:46.445 19:46:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:05:46.445 19:46:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:05:46.445 19:46:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:46.445 19:46:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:05:46.445 "name": "raid_bdev1", 00:05:46.445 "uuid": "5f8313fa-70f1-4bd3-8da9-b29780391d9c", 00:05:46.445 "strip_size_kb": 64, 00:05:46.445 "state": "online", 00:05:46.445 "raid_level": "raid0", 00:05:46.445 "superblock": true, 00:05:46.445 "num_base_bdevs": 2, 00:05:46.445 "num_base_bdevs_discovered": 2, 00:05:46.445 "num_base_bdevs_operational": 2, 00:05:46.445 "base_bdevs_list": [ 00:05:46.445 { 00:05:46.445 "name": "BaseBdev1", 00:05:46.445 "uuid": "29dcdc0f-f26c-5560-a42d-0be50a494fc0", 00:05:46.445 "is_configured": true, 00:05:46.445 "data_offset": 2048, 00:05:46.445 "data_size": 63488 00:05:46.445 }, 00:05:46.445 { 00:05:46.445 "name": "BaseBdev2", 00:05:46.445 "uuid": "c543d6fd-585a-5892-9ca8-d15a5bf0f3a5", 00:05:46.445 "is_configured": true, 00:05:46.445 "data_offset": 2048, 00:05:46.445 "data_size": 63488 00:05:46.445 } 00:05:46.445 ] 00:05:46.445 }' 00:05:46.445 19:46:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:05:46.445 19:46:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:05:46.703 19:46:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:05:46.703 19:46:37 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:05:46.961 [2024-11-26 19:46:37.727978] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:05:47.896 19:46:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:05:47.896 19:46:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:47.896 19:46:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:05:47.896 19:46:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:47.896 19:46:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:05:47.896 19:46:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:05:47.896 19:46:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:05:47.896 19:46:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:05:47.896 19:46:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:05:47.896 19:46:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:05:47.896 19:46:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:05:47.896 19:46:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:05:47.896 19:46:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:05:47.896 19:46:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:05:47.896 19:46:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:05:47.896 19:46:38 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:05:47.896 19:46:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:05:47.896 19:46:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:05:47.896 19:46:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:05:47.896 19:46:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:47.896 19:46:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:05:47.896 19:46:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:47.896 19:46:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:05:47.896 "name": "raid_bdev1", 00:05:47.896 "uuid": "5f8313fa-70f1-4bd3-8da9-b29780391d9c", 00:05:47.896 "strip_size_kb": 64, 00:05:47.896 "state": "online", 00:05:47.896 "raid_level": "raid0", 00:05:47.896 "superblock": true, 00:05:47.896 "num_base_bdevs": 2, 00:05:47.896 "num_base_bdevs_discovered": 2, 00:05:47.896 "num_base_bdevs_operational": 2, 00:05:47.896 "base_bdevs_list": [ 00:05:47.896 { 00:05:47.896 "name": "BaseBdev1", 00:05:47.896 "uuid": "29dcdc0f-f26c-5560-a42d-0be50a494fc0", 00:05:47.896 "is_configured": true, 00:05:47.896 "data_offset": 2048, 00:05:47.896 "data_size": 63488 00:05:47.896 }, 00:05:47.896 { 00:05:47.896 "name": "BaseBdev2", 00:05:47.896 "uuid": "c543d6fd-585a-5892-9ca8-d15a5bf0f3a5", 00:05:47.896 "is_configured": true, 00:05:47.896 "data_offset": 2048, 00:05:47.896 "data_size": 63488 00:05:47.896 } 00:05:47.896 ] 00:05:47.896 }' 00:05:47.896 19:46:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:05:47.896 19:46:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:05:48.154 19:46:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- 
# rpc_cmd bdev_raid_delete raid_bdev1 00:05:48.154 19:46:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:48.154 19:46:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:05:48.154 [2024-11-26 19:46:38.949827] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:05:48.154 [2024-11-26 19:46:38.949866] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:05:48.154 [2024-11-26 19:46:38.953501] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:05:48.154 [2024-11-26 19:46:38.953582] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:05:48.154 [2024-11-26 19:46:38.953642] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:05:48.154 [2024-11-26 19:46:38.953661] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:05:48.154 { 00:05:48.154 "results": [ 00:05:48.154 { 00:05:48.154 "job": "raid_bdev1", 00:05:48.154 "core_mask": "0x1", 00:05:48.154 "workload": "randrw", 00:05:48.154 "percentage": 50, 00:05:48.154 "status": "finished", 00:05:48.154 "queue_depth": 1, 00:05:48.154 "io_size": 131072, 00:05:48.154 "runtime": 1.220006, 00:05:48.154 "iops": 15749.102873264557, 00:05:48.154 "mibps": 1968.6378591580697, 00:05:48.154 "io_failed": 1, 00:05:48.154 "io_timeout": 0, 00:05:48.154 "avg_latency_us": 87.59688288396485, 00:05:48.154 "min_latency_us": 26.584615384615386, 00:05:48.154 "max_latency_us": 1493.4646153846154 00:05:48.154 } 00:05:48.154 ], 00:05:48.154 "core_count": 1 00:05:48.154 } 00:05:48.154 19:46:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:48.154 19:46:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 60267 00:05:48.154 19:46:38 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@954 -- # '[' -z 60267 ']' 00:05:48.154 19:46:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 60267 00:05:48.154 19:46:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:05:48.154 19:46:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:48.154 19:46:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60267 00:05:48.154 killing process with pid 60267 00:05:48.154 19:46:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:48.154 19:46:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:48.154 19:46:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60267' 00:05:48.154 19:46:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 60267 00:05:48.154 [2024-11-26 19:46:38.980825] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:05:48.154 19:46:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 60267 00:05:48.154 [2024-11-26 19:46:39.069924] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:05:49.089 19:46:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.F0DNP1VN7N 00:05:49.089 19:46:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:05:49.090 19:46:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:05:49.090 ************************************ 00:05:49.090 END TEST raid_write_error_test 00:05:49.090 ************************************ 00:05:49.090 19:46:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.82 00:05:49.090 19:46:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:05:49.090 
19:46:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:05:49.090 19:46:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:05:49.090 19:46:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.82 != \0\.\0\0 ]] 00:05:49.090 00:05:49.090 real 0m3.593s 00:05:49.090 user 0m4.263s 00:05:49.090 sys 0m0.418s 00:05:49.090 19:46:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:49.090 19:46:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:05:49.090 19:46:39 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:05:49.090 19:46:39 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:05:49.090 19:46:39 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:05:49.090 19:46:39 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:49.090 19:46:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:05:49.090 ************************************ 00:05:49.090 START TEST raid_state_function_test 00:05:49.090 ************************************ 00:05:49.090 19:46:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 2 false 00:05:49.090 19:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:05:49.090 19:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:05:49.090 19:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:05:49.090 19:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:05:49.090 19:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:05:49.090 19:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 
00:05:49.090 19:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:05:49.090 19:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:05:49.090 19:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:05:49.090 19:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:05:49.090 19:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:05:49.090 19:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:05:49.090 19:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:05:49.090 19:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:05:49.090 19:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:05:49.090 19:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:05:49.090 19:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:05:49.090 19:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:05:49.090 19:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:05:49.090 19:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:05:49.090 19:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:05:49.090 19:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:05:49.090 19:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:05:49.090 Process raid pid: 60402 00:05:49.090 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:49.090 19:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=60402 00:05:49.090 19:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 60402' 00:05:49.090 19:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 60402 00:05:49.090 19:46:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 60402 ']' 00:05:49.090 19:46:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:49.090 19:46:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:49.090 19:46:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:49.090 19:46:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:49.090 19:46:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:05:49.090 19:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:05:49.090 [2024-11-26 19:46:39.992290] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 
00:05:49.090 [2024-11-26 19:46:39.992429] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:49.347 [2024-11-26 19:46:40.155757] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.347 [2024-11-26 19:46:40.276876] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.604 [2024-11-26 19:46:40.428143] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:05:49.604 [2024-11-26 19:46:40.428198] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:05:50.170 19:46:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:50.170 19:46:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:05:50.170 19:46:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:05:50.170 19:46:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:50.170 19:46:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:05:50.170 [2024-11-26 19:46:40.911701] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:05:50.170 [2024-11-26 19:46:40.911773] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:05:50.170 [2024-11-26 19:46:40.911786] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:05:50.170 [2024-11-26 19:46:40.911796] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:05:50.170 19:46:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:50.170 19:46:40 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:05:50.170 19:46:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:05:50.170 19:46:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:05:50.170 19:46:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:05:50.170 19:46:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:05:50.170 19:46:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:05:50.170 19:46:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:05:50.170 19:46:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:05:50.170 19:46:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:05:50.170 19:46:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:05:50.170 19:46:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:05:50.170 19:46:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:05:50.170 19:46:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:50.170 19:46:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:05:50.170 19:46:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:50.170 19:46:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:05:50.170 "name": "Existed_Raid", 00:05:50.170 "uuid": "00000000-0000-0000-0000-000000000000", 00:05:50.170 "strip_size_kb": 64, 00:05:50.170 "state": "configuring", 00:05:50.170 
"raid_level": "concat", 00:05:50.170 "superblock": false, 00:05:50.170 "num_base_bdevs": 2, 00:05:50.170 "num_base_bdevs_discovered": 0, 00:05:50.170 "num_base_bdevs_operational": 2, 00:05:50.170 "base_bdevs_list": [ 00:05:50.170 { 00:05:50.170 "name": "BaseBdev1", 00:05:50.170 "uuid": "00000000-0000-0000-0000-000000000000", 00:05:50.170 "is_configured": false, 00:05:50.170 "data_offset": 0, 00:05:50.170 "data_size": 0 00:05:50.170 }, 00:05:50.170 { 00:05:50.170 "name": "BaseBdev2", 00:05:50.170 "uuid": "00000000-0000-0000-0000-000000000000", 00:05:50.170 "is_configured": false, 00:05:50.170 "data_offset": 0, 00:05:50.170 "data_size": 0 00:05:50.170 } 00:05:50.170 ] 00:05:50.170 }' 00:05:50.170 19:46:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:05:50.170 19:46:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:05:50.428 19:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:05:50.428 19:46:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:50.428 19:46:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:05:50.428 [2024-11-26 19:46:41.231716] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:05:50.428 [2024-11-26 19:46:41.231767] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:05:50.428 19:46:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:50.428 19:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:05:50.428 19:46:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:50.428 19:46:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:05:50.428 [2024-11-26 19:46:41.239689] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:05:50.428 [2024-11-26 19:46:41.239735] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:05:50.428 [2024-11-26 19:46:41.239744] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:05:50.428 [2024-11-26 19:46:41.239755] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:05:50.428 19:46:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:50.428 19:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:05:50.428 19:46:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:50.428 19:46:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:05:50.428 [2024-11-26 19:46:41.274868] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:05:50.428 BaseBdev1 00:05:50.428 19:46:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:50.428 19:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:05:50.428 19:46:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:05:50.428 19:46:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:05:50.428 19:46:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:05:50.428 19:46:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:05:50.428 19:46:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:05:50.428 19:46:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # 
rpc_cmd bdev_wait_for_examine 00:05:50.428 19:46:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:50.428 19:46:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:05:50.428 19:46:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:50.428 19:46:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:05:50.428 19:46:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:50.428 19:46:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:05:50.428 [ 00:05:50.428 { 00:05:50.428 "name": "BaseBdev1", 00:05:50.428 "aliases": [ 00:05:50.428 "49377ef4-877c-452b-b1db-2f2333d6a4d3" 00:05:50.428 ], 00:05:50.428 "product_name": "Malloc disk", 00:05:50.428 "block_size": 512, 00:05:50.428 "num_blocks": 65536, 00:05:50.428 "uuid": "49377ef4-877c-452b-b1db-2f2333d6a4d3", 00:05:50.428 "assigned_rate_limits": { 00:05:50.428 "rw_ios_per_sec": 0, 00:05:50.428 "rw_mbytes_per_sec": 0, 00:05:50.428 "r_mbytes_per_sec": 0, 00:05:50.428 "w_mbytes_per_sec": 0 00:05:50.428 }, 00:05:50.428 "claimed": true, 00:05:50.428 "claim_type": "exclusive_write", 00:05:50.428 "zoned": false, 00:05:50.428 "supported_io_types": { 00:05:50.428 "read": true, 00:05:50.428 "write": true, 00:05:50.428 "unmap": true, 00:05:50.428 "flush": true, 00:05:50.428 "reset": true, 00:05:50.428 "nvme_admin": false, 00:05:50.428 "nvme_io": false, 00:05:50.428 "nvme_io_md": false, 00:05:50.428 "write_zeroes": true, 00:05:50.428 "zcopy": true, 00:05:50.428 "get_zone_info": false, 00:05:50.428 "zone_management": false, 00:05:50.428 "zone_append": false, 00:05:50.428 "compare": false, 00:05:50.428 "compare_and_write": false, 00:05:50.428 "abort": true, 00:05:50.428 "seek_hole": false, 00:05:50.428 "seek_data": false, 00:05:50.428 "copy": true, 00:05:50.428 "nvme_iov_md": 
false 00:05:50.428 }, 00:05:50.428 "memory_domains": [ 00:05:50.428 { 00:05:50.428 "dma_device_id": "system", 00:05:50.428 "dma_device_type": 1 00:05:50.428 }, 00:05:50.428 { 00:05:50.428 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:50.428 "dma_device_type": 2 00:05:50.428 } 00:05:50.428 ], 00:05:50.428 "driver_specific": {} 00:05:50.428 } 00:05:50.428 ] 00:05:50.428 19:46:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:50.428 19:46:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:05:50.428 19:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:05:50.428 19:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:05:50.428 19:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:05:50.428 19:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:05:50.428 19:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:05:50.428 19:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:05:50.429 19:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:05:50.429 19:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:05:50.429 19:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:05:50.429 19:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:05:50.429 19:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:05:50.429 19:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:05:50.429 
19:46:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:50.429 19:46:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:05:50.429 19:46:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:50.429 19:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:05:50.429 "name": "Existed_Raid", 00:05:50.429 "uuid": "00000000-0000-0000-0000-000000000000", 00:05:50.429 "strip_size_kb": 64, 00:05:50.429 "state": "configuring", 00:05:50.429 "raid_level": "concat", 00:05:50.429 "superblock": false, 00:05:50.429 "num_base_bdevs": 2, 00:05:50.429 "num_base_bdevs_discovered": 1, 00:05:50.429 "num_base_bdevs_operational": 2, 00:05:50.429 "base_bdevs_list": [ 00:05:50.429 { 00:05:50.429 "name": "BaseBdev1", 00:05:50.429 "uuid": "49377ef4-877c-452b-b1db-2f2333d6a4d3", 00:05:50.429 "is_configured": true, 00:05:50.429 "data_offset": 0, 00:05:50.429 "data_size": 65536 00:05:50.429 }, 00:05:50.429 { 00:05:50.429 "name": "BaseBdev2", 00:05:50.429 "uuid": "00000000-0000-0000-0000-000000000000", 00:05:50.429 "is_configured": false, 00:05:50.429 "data_offset": 0, 00:05:50.429 "data_size": 0 00:05:50.429 } 00:05:50.429 ] 00:05:50.429 }' 00:05:50.429 19:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:05:50.429 19:46:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:05:50.752 19:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:05:50.752 19:46:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:50.752 19:46:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:05:50.752 [2024-11-26 19:46:41.611043] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:05:50.752 [2024-11-26 19:46:41.611110] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:05:50.752 19:46:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:50.752 19:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:05:50.752 19:46:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:50.752 19:46:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:05:50.752 [2024-11-26 19:46:41.619103] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:05:50.752 [2024-11-26 19:46:41.621129] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:05:50.752 [2024-11-26 19:46:41.621177] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:05:50.752 19:46:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:50.752 19:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:05:50.752 19:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:05:50.752 19:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:05:50.752 19:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:05:50.752 19:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:05:50.752 19:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:05:50.752 19:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:05:50.752 19:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=2 00:05:50.752 19:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:05:50.752 19:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:05:50.752 19:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:05:50.752 19:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:05:50.752 19:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:05:50.752 19:46:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:50.752 19:46:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:05:50.752 19:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:05:50.752 19:46:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:50.752 19:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:05:50.752 "name": "Existed_Raid", 00:05:50.752 "uuid": "00000000-0000-0000-0000-000000000000", 00:05:50.752 "strip_size_kb": 64, 00:05:50.752 "state": "configuring", 00:05:50.752 "raid_level": "concat", 00:05:50.752 "superblock": false, 00:05:50.752 "num_base_bdevs": 2, 00:05:50.752 "num_base_bdevs_discovered": 1, 00:05:50.752 "num_base_bdevs_operational": 2, 00:05:50.752 "base_bdevs_list": [ 00:05:50.752 { 00:05:50.752 "name": "BaseBdev1", 00:05:50.752 "uuid": "49377ef4-877c-452b-b1db-2f2333d6a4d3", 00:05:50.752 "is_configured": true, 00:05:50.752 "data_offset": 0, 00:05:50.752 "data_size": 65536 00:05:50.752 }, 00:05:50.752 { 00:05:50.752 "name": "BaseBdev2", 00:05:50.752 "uuid": "00000000-0000-0000-0000-000000000000", 00:05:50.752 "is_configured": false, 00:05:50.752 "data_offset": 0, 00:05:50.752 "data_size": 0 00:05:50.752 } 
00:05:50.752 ] 00:05:50.752 }' 00:05:50.752 19:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:05:50.752 19:46:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:05:51.011 19:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:05:51.011 19:46:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:51.011 19:46:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:05:51.270 [2024-11-26 19:46:41.956516] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:05:51.270 [2024-11-26 19:46:41.956581] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:05:51.270 [2024-11-26 19:46:41.956590] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:05:51.270 [2024-11-26 19:46:41.956872] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:05:51.270 [2024-11-26 19:46:41.957033] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:05:51.270 [2024-11-26 19:46:41.957045] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:05:51.270 [2024-11-26 19:46:41.957323] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:05:51.270 BaseBdev2 00:05:51.270 19:46:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:51.270 19:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:05:51.270 19:46:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:05:51.270 19:46:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:05:51.270 19:46:41 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:05:51.270 19:46:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:05:51.270 19:46:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:05:51.270 19:46:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:05:51.270 19:46:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:51.270 19:46:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:05:51.270 19:46:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:51.270 19:46:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:05:51.270 19:46:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:51.270 19:46:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:05:51.270 [ 00:05:51.270 { 00:05:51.270 "name": "BaseBdev2", 00:05:51.270 "aliases": [ 00:05:51.270 "c3a21c1e-dd66-4dc7-b133-db9c074c4ff1" 00:05:51.270 ], 00:05:51.270 "product_name": "Malloc disk", 00:05:51.270 "block_size": 512, 00:05:51.270 "num_blocks": 65536, 00:05:51.270 "uuid": "c3a21c1e-dd66-4dc7-b133-db9c074c4ff1", 00:05:51.270 "assigned_rate_limits": { 00:05:51.270 "rw_ios_per_sec": 0, 00:05:51.270 "rw_mbytes_per_sec": 0, 00:05:51.270 "r_mbytes_per_sec": 0, 00:05:51.270 "w_mbytes_per_sec": 0 00:05:51.270 }, 00:05:51.270 "claimed": true, 00:05:51.270 "claim_type": "exclusive_write", 00:05:51.270 "zoned": false, 00:05:51.270 "supported_io_types": { 00:05:51.270 "read": true, 00:05:51.270 "write": true, 00:05:51.270 "unmap": true, 00:05:51.270 "flush": true, 00:05:51.270 "reset": true, 00:05:51.270 "nvme_admin": false, 00:05:51.270 "nvme_io": false, 00:05:51.270 "nvme_io_md": 
false, 00:05:51.270 "write_zeroes": true, 00:05:51.270 "zcopy": true, 00:05:51.270 "get_zone_info": false, 00:05:51.270 "zone_management": false, 00:05:51.270 "zone_append": false, 00:05:51.270 "compare": false, 00:05:51.270 "compare_and_write": false, 00:05:51.270 "abort": true, 00:05:51.270 "seek_hole": false, 00:05:51.270 "seek_data": false, 00:05:51.270 "copy": true, 00:05:51.270 "nvme_iov_md": false 00:05:51.270 }, 00:05:51.270 "memory_domains": [ 00:05:51.270 { 00:05:51.270 "dma_device_id": "system", 00:05:51.270 "dma_device_type": 1 00:05:51.270 }, 00:05:51.270 { 00:05:51.270 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:51.270 "dma_device_type": 2 00:05:51.270 } 00:05:51.270 ], 00:05:51.270 "driver_specific": {} 00:05:51.270 } 00:05:51.270 ] 00:05:51.270 19:46:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:51.270 19:46:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:05:51.270 19:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:05:51.270 19:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:05:51.270 19:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:05:51.270 19:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:05:51.270 19:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:05:51.270 19:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:05:51.270 19:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:05:51.270 19:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:05:51.270 19:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:05:51.270 19:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:05:51.270 19:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:05:51.270 19:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:05:51.270 19:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:05:51.270 19:46:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:51.270 19:46:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:05:51.270 19:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:05:51.270 19:46:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:51.270 19:46:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:05:51.270 "name": "Existed_Raid", 00:05:51.270 "uuid": "0389bbef-cf48-4a3f-93d7-b2cbb6a3f235", 00:05:51.270 "strip_size_kb": 64, 00:05:51.270 "state": "online", 00:05:51.270 "raid_level": "concat", 00:05:51.270 "superblock": false, 00:05:51.270 "num_base_bdevs": 2, 00:05:51.270 "num_base_bdevs_discovered": 2, 00:05:51.271 "num_base_bdevs_operational": 2, 00:05:51.271 "base_bdevs_list": [ 00:05:51.271 { 00:05:51.271 "name": "BaseBdev1", 00:05:51.271 "uuid": "49377ef4-877c-452b-b1db-2f2333d6a4d3", 00:05:51.271 "is_configured": true, 00:05:51.271 "data_offset": 0, 00:05:51.271 "data_size": 65536 00:05:51.271 }, 00:05:51.271 { 00:05:51.271 "name": "BaseBdev2", 00:05:51.271 "uuid": "c3a21c1e-dd66-4dc7-b133-db9c074c4ff1", 00:05:51.271 "is_configured": true, 00:05:51.271 "data_offset": 0, 00:05:51.271 "data_size": 65536 00:05:51.271 } 00:05:51.271 ] 00:05:51.271 }' 00:05:51.271 19:46:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:05:51.271 19:46:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:05:51.528 19:46:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:05:51.529 19:46:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:05:51.529 19:46:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:05:51.529 19:46:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:05:51.529 19:46:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:05:51.529 19:46:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:05:51.529 19:46:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:05:51.529 19:46:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:05:51.529 19:46:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:51.529 19:46:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:05:51.529 [2024-11-26 19:46:42.308979] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:05:51.529 19:46:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:51.529 19:46:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:05:51.529 "name": "Existed_Raid", 00:05:51.529 "aliases": [ 00:05:51.529 "0389bbef-cf48-4a3f-93d7-b2cbb6a3f235" 00:05:51.529 ], 00:05:51.529 "product_name": "Raid Volume", 00:05:51.529 "block_size": 512, 00:05:51.529 "num_blocks": 131072, 00:05:51.529 "uuid": "0389bbef-cf48-4a3f-93d7-b2cbb6a3f235", 00:05:51.529 "assigned_rate_limits": { 00:05:51.529 "rw_ios_per_sec": 0, 00:05:51.529 "rw_mbytes_per_sec": 0, 00:05:51.529 "r_mbytes_per_sec": 
0, 00:05:51.529 "w_mbytes_per_sec": 0 00:05:51.529 }, 00:05:51.529 "claimed": false, 00:05:51.529 "zoned": false, 00:05:51.529 "supported_io_types": { 00:05:51.529 "read": true, 00:05:51.529 "write": true, 00:05:51.529 "unmap": true, 00:05:51.529 "flush": true, 00:05:51.529 "reset": true, 00:05:51.529 "nvme_admin": false, 00:05:51.529 "nvme_io": false, 00:05:51.529 "nvme_io_md": false, 00:05:51.529 "write_zeroes": true, 00:05:51.529 "zcopy": false, 00:05:51.529 "get_zone_info": false, 00:05:51.529 "zone_management": false, 00:05:51.529 "zone_append": false, 00:05:51.529 "compare": false, 00:05:51.529 "compare_and_write": false, 00:05:51.529 "abort": false, 00:05:51.529 "seek_hole": false, 00:05:51.529 "seek_data": false, 00:05:51.529 "copy": false, 00:05:51.529 "nvme_iov_md": false 00:05:51.529 }, 00:05:51.529 "memory_domains": [ 00:05:51.529 { 00:05:51.529 "dma_device_id": "system", 00:05:51.529 "dma_device_type": 1 00:05:51.529 }, 00:05:51.529 { 00:05:51.529 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:51.529 "dma_device_type": 2 00:05:51.529 }, 00:05:51.529 { 00:05:51.529 "dma_device_id": "system", 00:05:51.529 "dma_device_type": 1 00:05:51.529 }, 00:05:51.529 { 00:05:51.529 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:51.529 "dma_device_type": 2 00:05:51.529 } 00:05:51.529 ], 00:05:51.529 "driver_specific": { 00:05:51.529 "raid": { 00:05:51.529 "uuid": "0389bbef-cf48-4a3f-93d7-b2cbb6a3f235", 00:05:51.529 "strip_size_kb": 64, 00:05:51.529 "state": "online", 00:05:51.529 "raid_level": "concat", 00:05:51.529 "superblock": false, 00:05:51.529 "num_base_bdevs": 2, 00:05:51.529 "num_base_bdevs_discovered": 2, 00:05:51.529 "num_base_bdevs_operational": 2, 00:05:51.529 "base_bdevs_list": [ 00:05:51.529 { 00:05:51.529 "name": "BaseBdev1", 00:05:51.529 "uuid": "49377ef4-877c-452b-b1db-2f2333d6a4d3", 00:05:51.529 "is_configured": true, 00:05:51.529 "data_offset": 0, 00:05:51.529 "data_size": 65536 00:05:51.529 }, 00:05:51.529 { 00:05:51.529 "name": "BaseBdev2", 
00:05:51.529 "uuid": "c3a21c1e-dd66-4dc7-b133-db9c074c4ff1", 00:05:51.529 "is_configured": true, 00:05:51.529 "data_offset": 0, 00:05:51.529 "data_size": 65536 00:05:51.529 } 00:05:51.529 ] 00:05:51.529 } 00:05:51.529 } 00:05:51.529 }' 00:05:51.529 19:46:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:05:51.529 19:46:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:05:51.529 BaseBdev2' 00:05:51.529 19:46:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:05:51.529 19:46:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:05:51.529 19:46:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:05:51.529 19:46:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:05:51.529 19:46:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:05:51.529 19:46:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:51.529 19:46:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:05:51.529 19:46:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:51.529 19:46:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:05:51.529 19:46:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:05:51.529 19:46:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:05:51.529 19:46:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:05:51.529 19:46:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:05:51.529 19:46:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:51.529 19:46:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:05:51.529 19:46:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:51.529 19:46:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:05:51.529 19:46:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:05:51.529 19:46:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:05:51.529 19:46:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:51.529 19:46:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:05:51.786 [2024-11-26 19:46:42.464763] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:05:51.786 [2024-11-26 19:46:42.464806] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:05:51.786 [2024-11-26 19:46:42.464865] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:05:51.786 19:46:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:51.787 19:46:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:05:51.787 19:46:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:05:51.787 19:46:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:05:51.787 19:46:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:05:51.787 19:46:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # 
expected_state=offline 00:05:51.787 19:46:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:05:51.787 19:46:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:05:51.787 19:46:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:05:51.787 19:46:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:05:51.787 19:46:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:05:51.787 19:46:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:05:51.787 19:46:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:05:51.787 19:46:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:05:51.787 19:46:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:05:51.787 19:46:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:05:51.787 19:46:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:05:51.787 19:46:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:05:51.787 19:46:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:51.787 19:46:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:05:51.787 19:46:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:51.787 19:46:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:05:51.787 "name": "Existed_Raid", 00:05:51.787 "uuid": "0389bbef-cf48-4a3f-93d7-b2cbb6a3f235", 00:05:51.787 "strip_size_kb": 64, 00:05:51.787 
"state": "offline", 00:05:51.787 "raid_level": "concat", 00:05:51.787 "superblock": false, 00:05:51.787 "num_base_bdevs": 2, 00:05:51.787 "num_base_bdevs_discovered": 1, 00:05:51.787 "num_base_bdevs_operational": 1, 00:05:51.787 "base_bdevs_list": [ 00:05:51.787 { 00:05:51.787 "name": null, 00:05:51.787 "uuid": "00000000-0000-0000-0000-000000000000", 00:05:51.787 "is_configured": false, 00:05:51.787 "data_offset": 0, 00:05:51.787 "data_size": 65536 00:05:51.787 }, 00:05:51.787 { 00:05:51.787 "name": "BaseBdev2", 00:05:51.787 "uuid": "c3a21c1e-dd66-4dc7-b133-db9c074c4ff1", 00:05:51.787 "is_configured": true, 00:05:51.787 "data_offset": 0, 00:05:51.787 "data_size": 65536 00:05:51.787 } 00:05:51.787 ] 00:05:51.787 }' 00:05:51.787 19:46:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:05:51.787 19:46:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:05:52.044 19:46:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:05:52.045 19:46:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:05:52.045 19:46:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:05:52.045 19:46:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:52.045 19:46:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:05:52.045 19:46:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:05:52.045 19:46:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:52.045 19:46:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:05:52.045 19:46:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:05:52.045 19:46:42 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:05:52.045 19:46:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:52.045 19:46:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:05:52.045 [2024-11-26 19:46:42.897083] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:05:52.045 [2024-11-26 19:46:42.897283] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:05:52.045 19:46:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:52.045 19:46:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:05:52.045 19:46:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:05:52.045 19:46:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:05:52.045 19:46:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:05:52.045 19:46:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:52.045 19:46:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:05:52.045 19:46:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:52.302 19:46:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:05:52.302 19:46:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:05:52.302 19:46:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:05:52.302 19:46:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 60402 00:05:52.302 19:46:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 60402 ']' 00:05:52.302 19:46:42 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@958 -- # kill -0 60402 00:05:52.302 19:46:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:05:52.302 19:46:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:52.302 19:46:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60402 00:05:52.302 killing process with pid 60402 00:05:52.302 19:46:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:52.302 19:46:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:52.302 19:46:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60402' 00:05:52.302 19:46:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 60402 00:05:52.302 [2024-11-26 19:46:43.026641] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:05:52.302 19:46:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 60402 00:05:52.302 [2024-11-26 19:46:43.037894] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:05:52.867 ************************************ 00:05:52.867 END TEST raid_state_function_test 00:05:52.867 ************************************ 00:05:52.867 19:46:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:05:52.867 00:05:52.867 real 0m3.883s 00:05:52.867 user 0m5.549s 00:05:52.867 sys 0m0.625s 00:05:52.867 19:46:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:52.867 19:46:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:05:53.157 19:46:43 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:05:53.157 19:46:43 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 
']' 00:05:53.157 19:46:43 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:53.157 19:46:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:05:53.157 ************************************ 00:05:53.157 START TEST raid_state_function_test_sb 00:05:53.157 ************************************ 00:05:53.157 19:46:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 2 true 00:05:53.157 19:46:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:05:53.157 19:46:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:05:53.157 19:46:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:05:53.157 19:46:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:05:53.157 19:46:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:05:53.157 19:46:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:05:53.157 19:46:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:05:53.157 19:46:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:05:53.157 19:46:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:05:53.157 19:46:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:05:53.157 19:46:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:05:53.157 19:46:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:05:53.157 19:46:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:05:53.157 19:46:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 
00:05:53.157 19:46:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:05:53.157 19:46:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:05:53.157 19:46:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:05:53.157 Process raid pid: 60644 00:05:53.157 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:53.157 19:46:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:05:53.157 19:46:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:05:53.157 19:46:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:05:53.157 19:46:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:05:53.157 19:46:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:05:53.157 19:46:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:05:53.157 19:46:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=60644 00:05:53.157 19:46:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 60644' 00:05:53.157 19:46:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 60644 00:05:53.157 19:46:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 60644 ']' 00:05:53.157 19:46:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:53.157 19:46:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:53.157 19:46:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:53.157 19:46:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:53.157 19:46:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:05:53.157 19:46:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:05:53.157 [2024-11-26 19:46:43.916574] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 00:05:53.157 [2024-11-26 19:46:43.916897] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:53.157 [2024-11-26 19:46:44.076743] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.424 [2024-11-26 19:46:44.180709] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.424 [2024-11-26 19:46:44.304243] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:05:53.424 [2024-11-26 19:46:44.304475] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:05:53.990 19:46:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:53.990 19:46:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:05:53.990 19:46:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:05:53.990 19:46:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:53.990 19:46:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:05:53.990 [2024-11-26 19:46:44.773057] bdev.c:8482:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev1 00:05:53.990 [2024-11-26 19:46:44.773279] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:05:53.990 [2024-11-26 19:46:44.773336] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:05:53.990 [2024-11-26 19:46:44.773362] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:05:53.990 19:46:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:53.990 19:46:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:05:53.990 19:46:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:05:53.990 19:46:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:05:53.990 19:46:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:05:53.990 19:46:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:05:53.990 19:46:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:05:53.990 19:46:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:05:53.990 19:46:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:05:53.990 19:46:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:05:53.990 19:46:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:05:53.990 19:46:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:05:53.990 19:46:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs 
all 00:05:53.990 19:46:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:53.990 19:46:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:05:53.990 19:46:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:53.990 19:46:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:05:53.990 "name": "Existed_Raid", 00:05:53.990 "uuid": "5fc9abf3-ab4a-46b9-b01b-9768f66a40f5", 00:05:53.990 "strip_size_kb": 64, 00:05:53.990 "state": "configuring", 00:05:53.990 "raid_level": "concat", 00:05:53.990 "superblock": true, 00:05:53.990 "num_base_bdevs": 2, 00:05:53.990 "num_base_bdevs_discovered": 0, 00:05:53.990 "num_base_bdevs_operational": 2, 00:05:53.990 "base_bdevs_list": [ 00:05:53.990 { 00:05:53.990 "name": "BaseBdev1", 00:05:53.990 "uuid": "00000000-0000-0000-0000-000000000000", 00:05:53.990 "is_configured": false, 00:05:53.990 "data_offset": 0, 00:05:53.990 "data_size": 0 00:05:53.990 }, 00:05:53.990 { 00:05:53.990 "name": "BaseBdev2", 00:05:53.990 "uuid": "00000000-0000-0000-0000-000000000000", 00:05:53.990 "is_configured": false, 00:05:53.990 "data_offset": 0, 00:05:53.990 "data_size": 0 00:05:53.990 } 00:05:53.990 ] 00:05:53.990 }' 00:05:53.990 19:46:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:05:53.990 19:46:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:05:54.248 19:46:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:05:54.248 19:46:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:54.248 19:46:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:05:54.248 [2024-11-26 19:46:45.089067] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:05:54.248 
[2024-11-26 19:46:45.089110] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:05:54.248 19:46:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:54.248 19:46:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:05:54.248 19:46:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:54.248 19:46:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:05:54.248 [2024-11-26 19:46:45.097068] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:05:54.248 [2024-11-26 19:46:45.097112] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:05:54.248 [2024-11-26 19:46:45.097120] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:05:54.248 [2024-11-26 19:46:45.097132] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:05:54.248 19:46:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:54.248 19:46:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:05:54.248 19:46:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:54.248 19:46:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:05:54.248 [2024-11-26 19:46:45.127201] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:05:54.248 BaseBdev1 00:05:54.248 19:46:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:54.248 19:46:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # 
waitforbdev BaseBdev1 00:05:54.248 19:46:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:05:54.248 19:46:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:05:54.248 19:46:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:05:54.248 19:46:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:05:54.248 19:46:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:05:54.248 19:46:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:05:54.248 19:46:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:54.248 19:46:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:05:54.248 19:46:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:54.248 19:46:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:05:54.248 19:46:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:54.248 19:46:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:05:54.248 [ 00:05:54.248 { 00:05:54.248 "name": "BaseBdev1", 00:05:54.248 "aliases": [ 00:05:54.248 "e789a782-ffe6-45a5-a345-81c2886ed9d1" 00:05:54.248 ], 00:05:54.248 "product_name": "Malloc disk", 00:05:54.248 "block_size": 512, 00:05:54.248 "num_blocks": 65536, 00:05:54.248 "uuid": "e789a782-ffe6-45a5-a345-81c2886ed9d1", 00:05:54.248 "assigned_rate_limits": { 00:05:54.248 "rw_ios_per_sec": 0, 00:05:54.248 "rw_mbytes_per_sec": 0, 00:05:54.248 "r_mbytes_per_sec": 0, 00:05:54.248 "w_mbytes_per_sec": 0 00:05:54.248 }, 00:05:54.248 "claimed": true, 00:05:54.248 "claim_type": 
"exclusive_write", 00:05:54.248 "zoned": false, 00:05:54.248 "supported_io_types": { 00:05:54.248 "read": true, 00:05:54.248 "write": true, 00:05:54.248 "unmap": true, 00:05:54.248 "flush": true, 00:05:54.248 "reset": true, 00:05:54.248 "nvme_admin": false, 00:05:54.248 "nvme_io": false, 00:05:54.248 "nvme_io_md": false, 00:05:54.248 "write_zeroes": true, 00:05:54.248 "zcopy": true, 00:05:54.248 "get_zone_info": false, 00:05:54.248 "zone_management": false, 00:05:54.249 "zone_append": false, 00:05:54.249 "compare": false, 00:05:54.249 "compare_and_write": false, 00:05:54.249 "abort": true, 00:05:54.249 "seek_hole": false, 00:05:54.249 "seek_data": false, 00:05:54.249 "copy": true, 00:05:54.249 "nvme_iov_md": false 00:05:54.249 }, 00:05:54.249 "memory_domains": [ 00:05:54.249 { 00:05:54.249 "dma_device_id": "system", 00:05:54.249 "dma_device_type": 1 00:05:54.249 }, 00:05:54.249 { 00:05:54.249 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:54.249 "dma_device_type": 2 00:05:54.249 } 00:05:54.249 ], 00:05:54.249 "driver_specific": {} 00:05:54.249 } 00:05:54.249 ] 00:05:54.249 19:46:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:54.249 19:46:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:05:54.249 19:46:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:05:54.249 19:46:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:05:54.249 19:46:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:05:54.249 19:46:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:05:54.249 19:46:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:05:54.249 19:46:45 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:05:54.249 19:46:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:05:54.249 19:46:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:05:54.249 19:46:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:05:54.249 19:46:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:05:54.249 19:46:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:05:54.249 19:46:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:05:54.249 19:46:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:54.249 19:46:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:05:54.249 19:46:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:54.506 19:46:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:05:54.506 "name": "Existed_Raid", 00:05:54.506 "uuid": "63f57a8b-d85d-4c49-b74e-6d7c9ce247ba", 00:05:54.506 "strip_size_kb": 64, 00:05:54.506 "state": "configuring", 00:05:54.506 "raid_level": "concat", 00:05:54.506 "superblock": true, 00:05:54.506 "num_base_bdevs": 2, 00:05:54.506 "num_base_bdevs_discovered": 1, 00:05:54.506 "num_base_bdevs_operational": 2, 00:05:54.506 "base_bdevs_list": [ 00:05:54.506 { 00:05:54.506 "name": "BaseBdev1", 00:05:54.506 "uuid": "e789a782-ffe6-45a5-a345-81c2886ed9d1", 00:05:54.506 "is_configured": true, 00:05:54.506 "data_offset": 2048, 00:05:54.506 "data_size": 63488 00:05:54.506 }, 00:05:54.507 { 00:05:54.507 "name": "BaseBdev2", 00:05:54.507 "uuid": "00000000-0000-0000-0000-000000000000", 00:05:54.507 "is_configured": false, 00:05:54.507 
"data_offset": 0, 00:05:54.507 "data_size": 0 00:05:54.507 } 00:05:54.507 ] 00:05:54.507 }' 00:05:54.507 19:46:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:05:54.507 19:46:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:05:54.764 19:46:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:05:54.764 19:46:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:54.764 19:46:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:05:54.764 [2024-11-26 19:46:45.471317] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:05:54.764 [2024-11-26 19:46:45.471388] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:05:54.764 19:46:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:54.765 19:46:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:05:54.765 19:46:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:54.765 19:46:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:05:54.765 [2024-11-26 19:46:45.479372] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:05:54.765 [2024-11-26 19:46:45.481057] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:05:54.765 [2024-11-26 19:46:45.481091] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:05:54.765 19:46:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:54.765 19:46:45 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:05:54.765 19:46:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:05:54.765 19:46:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:05:54.765 19:46:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:05:54.765 19:46:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:05:54.765 19:46:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:05:54.765 19:46:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:05:54.765 19:46:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:05:54.765 19:46:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:05:54.765 19:46:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:05:54.765 19:46:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:05:54.765 19:46:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:05:54.765 19:46:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:05:54.765 19:46:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:05:54.765 19:46:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:54.765 19:46:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:05:54.765 19:46:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:54.765 19:46:45 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:05:54.765 "name": "Existed_Raid", 00:05:54.765 "uuid": "87ae3836-8f40-4011-a5df-fea2db3884fa", 00:05:54.765 "strip_size_kb": 64, 00:05:54.765 "state": "configuring", 00:05:54.765 "raid_level": "concat", 00:05:54.765 "superblock": true, 00:05:54.765 "num_base_bdevs": 2, 00:05:54.765 "num_base_bdevs_discovered": 1, 00:05:54.765 "num_base_bdevs_operational": 2, 00:05:54.765 "base_bdevs_list": [ 00:05:54.765 { 00:05:54.765 "name": "BaseBdev1", 00:05:54.765 "uuid": "e789a782-ffe6-45a5-a345-81c2886ed9d1", 00:05:54.765 "is_configured": true, 00:05:54.765 "data_offset": 2048, 00:05:54.765 "data_size": 63488 00:05:54.765 }, 00:05:54.765 { 00:05:54.765 "name": "BaseBdev2", 00:05:54.765 "uuid": "00000000-0000-0000-0000-000000000000", 00:05:54.765 "is_configured": false, 00:05:54.765 "data_offset": 0, 00:05:54.765 "data_size": 0 00:05:54.765 } 00:05:54.765 ] 00:05:54.765 }' 00:05:54.765 19:46:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:05:54.765 19:46:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:05:55.023 19:46:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:05:55.023 19:46:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:55.023 19:46:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:05:55.023 [2024-11-26 19:46:45.824260] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:05:55.023 [2024-11-26 19:46:45.824523] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:05:55.023 [2024-11-26 19:46:45.824541] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:05:55.023 BaseBdev2 00:05:55.023 [2024-11-26 19:46:45.824859] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000005d40 00:05:55.023 [2024-11-26 19:46:45.824985] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:05:55.023 [2024-11-26 19:46:45.824995] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:05:55.023 [2024-11-26 19:46:45.825109] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:05:55.023 19:46:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:55.023 19:46:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:05:55.023 19:46:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:05:55.023 19:46:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:05:55.023 19:46:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:05:55.023 19:46:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:05:55.023 19:46:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:05:55.023 19:46:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:05:55.023 19:46:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:55.023 19:46:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:05:55.023 19:46:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:55.023 19:46:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:05:55.023 19:46:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:55.023 19:46:45 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:05:55.023 [ 00:05:55.023 { 00:05:55.023 "name": "BaseBdev2", 00:05:55.023 "aliases": [ 00:05:55.023 "c4531705-4887-4ff3-aa1a-cf9fd9d4375f" 00:05:55.023 ], 00:05:55.023 "product_name": "Malloc disk", 00:05:55.023 "block_size": 512, 00:05:55.023 "num_blocks": 65536, 00:05:55.023 "uuid": "c4531705-4887-4ff3-aa1a-cf9fd9d4375f", 00:05:55.023 "assigned_rate_limits": { 00:05:55.023 "rw_ios_per_sec": 0, 00:05:55.023 "rw_mbytes_per_sec": 0, 00:05:55.023 "r_mbytes_per_sec": 0, 00:05:55.023 "w_mbytes_per_sec": 0 00:05:55.023 }, 00:05:55.023 "claimed": true, 00:05:55.023 "claim_type": "exclusive_write", 00:05:55.023 "zoned": false, 00:05:55.023 "supported_io_types": { 00:05:55.023 "read": true, 00:05:55.023 "write": true, 00:05:55.023 "unmap": true, 00:05:55.023 "flush": true, 00:05:55.023 "reset": true, 00:05:55.023 "nvme_admin": false, 00:05:55.023 "nvme_io": false, 00:05:55.023 "nvme_io_md": false, 00:05:55.023 "write_zeroes": true, 00:05:55.023 "zcopy": true, 00:05:55.023 "get_zone_info": false, 00:05:55.023 "zone_management": false, 00:05:55.023 "zone_append": false, 00:05:55.023 "compare": false, 00:05:55.023 "compare_and_write": false, 00:05:55.023 "abort": true, 00:05:55.023 "seek_hole": false, 00:05:55.023 "seek_data": false, 00:05:55.023 "copy": true, 00:05:55.023 "nvme_iov_md": false 00:05:55.023 }, 00:05:55.023 "memory_domains": [ 00:05:55.023 { 00:05:55.023 "dma_device_id": "system", 00:05:55.023 "dma_device_type": 1 00:05:55.023 }, 00:05:55.023 { 00:05:55.023 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:55.023 "dma_device_type": 2 00:05:55.023 } 00:05:55.023 ], 00:05:55.023 "driver_specific": {} 00:05:55.023 } 00:05:55.023 ] 00:05:55.023 19:46:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:55.023 19:46:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:05:55.023 19:46:45 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:05:55.023 19:46:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:05:55.024 19:46:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:05:55.024 19:46:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:05:55.024 19:46:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:05:55.024 19:46:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:05:55.024 19:46:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:05:55.024 19:46:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:05:55.024 19:46:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:05:55.024 19:46:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:05:55.024 19:46:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:05:55.024 19:46:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:05:55.024 19:46:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:05:55.024 19:46:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:05:55.024 19:46:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:55.024 19:46:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:05:55.024 19:46:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:55.024 19:46:45 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:05:55.024 "name": "Existed_Raid", 00:05:55.024 "uuid": "87ae3836-8f40-4011-a5df-fea2db3884fa", 00:05:55.024 "strip_size_kb": 64, 00:05:55.024 "state": "online", 00:05:55.024 "raid_level": "concat", 00:05:55.024 "superblock": true, 00:05:55.024 "num_base_bdevs": 2, 00:05:55.024 "num_base_bdevs_discovered": 2, 00:05:55.024 "num_base_bdevs_operational": 2, 00:05:55.024 "base_bdevs_list": [ 00:05:55.024 { 00:05:55.024 "name": "BaseBdev1", 00:05:55.024 "uuid": "e789a782-ffe6-45a5-a345-81c2886ed9d1", 00:05:55.024 "is_configured": true, 00:05:55.024 "data_offset": 2048, 00:05:55.024 "data_size": 63488 00:05:55.024 }, 00:05:55.024 { 00:05:55.024 "name": "BaseBdev2", 00:05:55.024 "uuid": "c4531705-4887-4ff3-aa1a-cf9fd9d4375f", 00:05:55.024 "is_configured": true, 00:05:55.024 "data_offset": 2048, 00:05:55.024 "data_size": 63488 00:05:55.024 } 00:05:55.024 ] 00:05:55.024 }' 00:05:55.024 19:46:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:05:55.024 19:46:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:05:55.282 19:46:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:05:55.282 19:46:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:05:55.282 19:46:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:05:55.282 19:46:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:05:55.282 19:46:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:05:55.282 19:46:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:05:55.282 19:46:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:05:55.282 19:46:46 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:05:55.282 19:46:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:55.282 19:46:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:05:55.282 [2024-11-26 19:46:46.152630] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:05:55.282 19:46:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:55.282 19:46:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:05:55.282 "name": "Existed_Raid", 00:05:55.282 "aliases": [ 00:05:55.282 "87ae3836-8f40-4011-a5df-fea2db3884fa" 00:05:55.282 ], 00:05:55.282 "product_name": "Raid Volume", 00:05:55.282 "block_size": 512, 00:05:55.282 "num_blocks": 126976, 00:05:55.282 "uuid": "87ae3836-8f40-4011-a5df-fea2db3884fa", 00:05:55.282 "assigned_rate_limits": { 00:05:55.282 "rw_ios_per_sec": 0, 00:05:55.282 "rw_mbytes_per_sec": 0, 00:05:55.282 "r_mbytes_per_sec": 0, 00:05:55.282 "w_mbytes_per_sec": 0 00:05:55.282 }, 00:05:55.282 "claimed": false, 00:05:55.282 "zoned": false, 00:05:55.282 "supported_io_types": { 00:05:55.282 "read": true, 00:05:55.282 "write": true, 00:05:55.282 "unmap": true, 00:05:55.282 "flush": true, 00:05:55.282 "reset": true, 00:05:55.282 "nvme_admin": false, 00:05:55.282 "nvme_io": false, 00:05:55.282 "nvme_io_md": false, 00:05:55.282 "write_zeroes": true, 00:05:55.282 "zcopy": false, 00:05:55.282 "get_zone_info": false, 00:05:55.282 "zone_management": false, 00:05:55.282 "zone_append": false, 00:05:55.282 "compare": false, 00:05:55.282 "compare_and_write": false, 00:05:55.282 "abort": false, 00:05:55.282 "seek_hole": false, 00:05:55.282 "seek_data": false, 00:05:55.282 "copy": false, 00:05:55.282 "nvme_iov_md": false 00:05:55.282 }, 00:05:55.282 "memory_domains": [ 00:05:55.282 { 00:05:55.282 "dma_device_id": "system", 00:05:55.282 "dma_device_type": 1 
00:05:55.282 }, 00:05:55.282 { 00:05:55.282 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:55.282 "dma_device_type": 2 00:05:55.282 }, 00:05:55.282 { 00:05:55.282 "dma_device_id": "system", 00:05:55.282 "dma_device_type": 1 00:05:55.282 }, 00:05:55.282 { 00:05:55.282 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:55.282 "dma_device_type": 2 00:05:55.282 } 00:05:55.282 ], 00:05:55.282 "driver_specific": { 00:05:55.282 "raid": { 00:05:55.282 "uuid": "87ae3836-8f40-4011-a5df-fea2db3884fa", 00:05:55.282 "strip_size_kb": 64, 00:05:55.282 "state": "online", 00:05:55.282 "raid_level": "concat", 00:05:55.282 "superblock": true, 00:05:55.282 "num_base_bdevs": 2, 00:05:55.282 "num_base_bdevs_discovered": 2, 00:05:55.282 "num_base_bdevs_operational": 2, 00:05:55.282 "base_bdevs_list": [ 00:05:55.282 { 00:05:55.282 "name": "BaseBdev1", 00:05:55.282 "uuid": "e789a782-ffe6-45a5-a345-81c2886ed9d1", 00:05:55.282 "is_configured": true, 00:05:55.282 "data_offset": 2048, 00:05:55.282 "data_size": 63488 00:05:55.282 }, 00:05:55.282 { 00:05:55.282 "name": "BaseBdev2", 00:05:55.282 "uuid": "c4531705-4887-4ff3-aa1a-cf9fd9d4375f", 00:05:55.282 "is_configured": true, 00:05:55.282 "data_offset": 2048, 00:05:55.282 "data_size": 63488 00:05:55.282 } 00:05:55.282 ] 00:05:55.282 } 00:05:55.282 } 00:05:55.282 }' 00:05:55.282 19:46:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:05:55.282 19:46:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:05:55.282 BaseBdev2' 00:05:55.282 19:46:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:05:55.540 19:46:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:05:55.540 19:46:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in 
$base_bdev_names 00:05:55.540 19:46:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:05:55.540 19:46:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:55.540 19:46:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:05:55.540 19:46:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:05:55.540 19:46:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:55.540 19:46:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:05:55.540 19:46:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:05:55.540 19:46:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:05:55.540 19:46:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:05:55.540 19:46:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:55.540 19:46:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:05:55.540 19:46:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:05:55.540 19:46:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:55.540 19:46:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:05:55.540 19:46:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:05:55.540 19:46:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:05:55.540 19:46:46 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:55.540 19:46:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:05:55.540 [2024-11-26 19:46:46.308462] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:05:55.540 [2024-11-26 19:46:46.308499] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:05:55.540 [2024-11-26 19:46:46.308551] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:05:55.540 19:46:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:55.540 19:46:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:05:55.540 19:46:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:05:55.540 19:46:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:05:55.540 19:46:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:05:55.540 19:46:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:05:55.540 19:46:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:05:55.540 19:46:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:05:55.540 19:46:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:05:55.540 19:46:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:05:55.540 19:46:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:05:55.540 19:46:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:05:55.540 19:46:46 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:05:55.540 19:46:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:05:55.540 19:46:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:05:55.540 19:46:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:05:55.540 19:46:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:05:55.540 19:46:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:05:55.540 19:46:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:55.540 19:46:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:05:55.540 19:46:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:55.540 19:46:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:05:55.540 "name": "Existed_Raid", 00:05:55.540 "uuid": "87ae3836-8f40-4011-a5df-fea2db3884fa", 00:05:55.540 "strip_size_kb": 64, 00:05:55.540 "state": "offline", 00:05:55.540 "raid_level": "concat", 00:05:55.540 "superblock": true, 00:05:55.540 "num_base_bdevs": 2, 00:05:55.540 "num_base_bdevs_discovered": 1, 00:05:55.540 "num_base_bdevs_operational": 1, 00:05:55.540 "base_bdevs_list": [ 00:05:55.540 { 00:05:55.540 "name": null, 00:05:55.540 "uuid": "00000000-0000-0000-0000-000000000000", 00:05:55.540 "is_configured": false, 00:05:55.540 "data_offset": 0, 00:05:55.540 "data_size": 63488 00:05:55.540 }, 00:05:55.540 { 00:05:55.540 "name": "BaseBdev2", 00:05:55.540 "uuid": "c4531705-4887-4ff3-aa1a-cf9fd9d4375f", 00:05:55.540 "is_configured": true, 00:05:55.540 "data_offset": 2048, 00:05:55.541 "data_size": 63488 00:05:55.541 } 00:05:55.541 ] 00:05:55.541 }' 00:05:55.541 19:46:46 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:05:55.541 19:46:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:05:56.106 19:46:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:05:56.106 19:46:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:05:56.106 19:46:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:05:56.106 19:46:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:05:56.106 19:46:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:56.106 19:46:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:05:56.106 19:46:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:56.106 19:46:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:05:56.106 19:46:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:05:56.106 19:46:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:05:56.106 19:46:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:56.106 19:46:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:05:56.106 [2024-11-26 19:46:46.834709] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:05:56.106 [2024-11-26 19:46:46.834916] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:05:56.106 19:46:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:56.106 19:46:46 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@270 -- # (( i++ )) 00:05:56.106 19:46:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:05:56.106 19:46:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:05:56.106 19:46:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:05:56.106 19:46:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:56.106 19:46:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:05:56.106 19:46:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:56.106 19:46:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:05:56.106 19:46:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:05:56.106 19:46:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:05:56.106 19:46:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 60644 00:05:56.106 19:46:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 60644 ']' 00:05:56.106 19:46:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 60644 00:05:56.106 19:46:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:05:56.106 19:46:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:56.106 19:46:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60644 00:05:56.106 killing process with pid 60644 00:05:56.106 19:46:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:56.106 19:46:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' 
reactor_0 = sudo ']' 00:05:56.106 19:46:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60644' 00:05:56.106 19:46:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 60644 00:05:56.106 [2024-11-26 19:46:46.946465] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:05:56.106 19:46:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 60644 00:05:56.106 [2024-11-26 19:46:46.955587] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:05:56.671 19:46:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:05:56.671 ************************************ 00:05:56.671 END TEST raid_state_function_test_sb 00:05:56.671 ************************************ 00:05:56.671 00:05:56.671 real 0m3.733s 00:05:56.671 user 0m5.447s 00:05:56.671 sys 0m0.614s 00:05:56.671 19:46:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:56.671 19:46:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:05:56.928 19:46:47 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:05:56.928 19:46:47 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:56.928 19:46:47 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:56.928 19:46:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:05:56.928 ************************************ 00:05:56.928 START TEST raid_superblock_test 00:05:56.928 ************************************ 00:05:56.928 19:46:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 2 00:05:56.929 19:46:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:05:56.929 19:46:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 
00:05:56.929 19:46:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:05:56.929 19:46:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:05:56.929 19:46:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:05:56.929 19:46:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:05:56.929 19:46:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:05:56.929 19:46:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:05:56.929 19:46:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:05:56.929 19:46:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:05:56.929 19:46:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:05:56.929 19:46:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:05:56.929 19:46:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:05:56.929 19:46:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:05:56.929 19:46:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:05:56.929 19:46:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:05:56.929 19:46:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=60879 00:05:56.929 19:46:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 60879 00:05:56.929 19:46:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 60879 ']' 00:05:56.929 19:46:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:56.929 19:46:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # 
local max_retries=100 00:05:56.929 19:46:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:56.929 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:56.929 19:46:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:56.929 19:46:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:05:56.929 19:46:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:05:56.929 [2024-11-26 19:46:47.692006] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 00:05:56.929 [2024-11-26 19:46:47.692141] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60879 ] 00:05:56.929 [2024-11-26 19:46:47.854221] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.187 [2024-11-26 19:46:47.972514] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.187 [2024-11-26 19:46:48.119920] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:05:57.187 [2024-11-26 19:46:48.119986] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:05:57.754 19:46:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:57.754 19:46:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:05:57.754 19:46:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:05:57.754 19:46:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:05:57.754 19:46:48 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:05:57.754 19:46:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:05:57.754 19:46:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:05:57.754 19:46:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:05:57.754 19:46:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:05:57.754 19:46:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:05:57.754 19:46:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:05:57.754 malloc1 00:05:57.754 pt1 00:05:57.754 19:46:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:57.754 19:46:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:05:57.754 19:46:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:57.754 19:46:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:05:57.754 19:46:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:57.754 19:46:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:05:57.754 [2024-11-26 19:46:48.494005] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:05:57.754 [2024-11-26 19:46:48.494069] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:57.754 [2024-11-26 19:46:48.494092] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:05:57.754 [2024-11-26 19:46:48.494102] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:57.754 [2024-11-26 
19:46:48.496451] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:57.754 [2024-11-26 19:46:48.496480] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:05:57.754 19:46:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:57.754 19:46:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:05:57.754 19:46:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:05:57.754 19:46:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:05:57.754 19:46:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:05:57.754 19:46:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:05:57.754 19:46:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:05:57.754 19:46:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:05:57.754 19:46:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:05:57.754 19:46:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:05:57.754 19:46:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:57.754 19:46:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:05:57.754 malloc2 00:05:57.754 19:46:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:57.754 19:46:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:05:57.754 19:46:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:57.754 19:46:48 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:05:57.754 [2024-11-26 19:46:48.531976] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:05:57.754 [2024-11-26 19:46:48.532020] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:57.754 [2024-11-26 19:46:48.532045] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:05:57.754 [2024-11-26 19:46:48.532054] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:57.754 [2024-11-26 19:46:48.534269] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:57.754 [2024-11-26 19:46:48.534297] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:05:57.754 pt2 00:05:57.754 19:46:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:57.754 19:46:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:05:57.754 19:46:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:05:57.754 19:46:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:05:57.754 19:46:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:57.754 19:46:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:05:57.754 [2024-11-26 19:46:48.540030] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:05:57.754 [2024-11-26 19:46:48.541996] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:05:57.754 [2024-11-26 19:46:48.542152] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:05:57.754 [2024-11-26 19:46:48.542163] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:05:57.754 
[2024-11-26 19:46:48.542429] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:05:57.754 [2024-11-26 19:46:48.542574] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:05:57.755 [2024-11-26 19:46:48.542585] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:05:57.755 [2024-11-26 19:46:48.542719] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:05:57.755 19:46:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:57.755 19:46:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:05:57.755 19:46:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:05:57.755 19:46:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:05:57.755 19:46:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:05:57.755 19:46:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:05:57.755 19:46:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:05:57.755 19:46:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:05:57.755 19:46:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:05:57.755 19:46:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:05:57.755 19:46:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:05:57.755 19:46:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:05:57.755 19:46:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:57.755 19:46:48 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:05:57.755 19:46:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:05:57.755 19:46:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:57.755 19:46:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:05:57.755 "name": "raid_bdev1", 00:05:57.755 "uuid": "a0992610-9077-4a39-8e70-b24709c7442a", 00:05:57.755 "strip_size_kb": 64, 00:05:57.755 "state": "online", 00:05:57.755 "raid_level": "concat", 00:05:57.755 "superblock": true, 00:05:57.755 "num_base_bdevs": 2, 00:05:57.755 "num_base_bdevs_discovered": 2, 00:05:57.755 "num_base_bdevs_operational": 2, 00:05:57.755 "base_bdevs_list": [ 00:05:57.755 { 00:05:57.755 "name": "pt1", 00:05:57.755 "uuid": "00000000-0000-0000-0000-000000000001", 00:05:57.755 "is_configured": true, 00:05:57.755 "data_offset": 2048, 00:05:57.755 "data_size": 63488 00:05:57.755 }, 00:05:57.755 { 00:05:57.755 "name": "pt2", 00:05:57.755 "uuid": "00000000-0000-0000-0000-000000000002", 00:05:57.755 "is_configured": true, 00:05:57.755 "data_offset": 2048, 00:05:57.755 "data_size": 63488 00:05:57.755 } 00:05:57.755 ] 00:05:57.755 }' 00:05:57.755 19:46:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:05:57.755 19:46:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:05:58.013 19:46:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:05:58.013 19:46:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:05:58.013 19:46:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:05:58.013 19:46:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:05:58.013 19:46:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:05:58.013 19:46:48 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:05:58.013 19:46:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:05:58.013 19:46:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:05:58.013 19:46:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:58.013 19:46:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:05:58.013 [2024-11-26 19:46:48.828411] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:05:58.013 19:46:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:58.013 19:46:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:05:58.013 "name": "raid_bdev1", 00:05:58.013 "aliases": [ 00:05:58.013 "a0992610-9077-4a39-8e70-b24709c7442a" 00:05:58.013 ], 00:05:58.013 "product_name": "Raid Volume", 00:05:58.013 "block_size": 512, 00:05:58.013 "num_blocks": 126976, 00:05:58.013 "uuid": "a0992610-9077-4a39-8e70-b24709c7442a", 00:05:58.013 "assigned_rate_limits": { 00:05:58.013 "rw_ios_per_sec": 0, 00:05:58.013 "rw_mbytes_per_sec": 0, 00:05:58.013 "r_mbytes_per_sec": 0, 00:05:58.013 "w_mbytes_per_sec": 0 00:05:58.013 }, 00:05:58.013 "claimed": false, 00:05:58.013 "zoned": false, 00:05:58.013 "supported_io_types": { 00:05:58.013 "read": true, 00:05:58.013 "write": true, 00:05:58.013 "unmap": true, 00:05:58.013 "flush": true, 00:05:58.013 "reset": true, 00:05:58.013 "nvme_admin": false, 00:05:58.013 "nvme_io": false, 00:05:58.013 "nvme_io_md": false, 00:05:58.013 "write_zeroes": true, 00:05:58.013 "zcopy": false, 00:05:58.013 "get_zone_info": false, 00:05:58.013 "zone_management": false, 00:05:58.013 "zone_append": false, 00:05:58.013 "compare": false, 00:05:58.013 "compare_and_write": false, 00:05:58.013 "abort": false, 00:05:58.013 "seek_hole": false, 00:05:58.013 
"seek_data": false, 00:05:58.013 "copy": false, 00:05:58.013 "nvme_iov_md": false 00:05:58.013 }, 00:05:58.013 "memory_domains": [ 00:05:58.013 { 00:05:58.013 "dma_device_id": "system", 00:05:58.013 "dma_device_type": 1 00:05:58.013 }, 00:05:58.013 { 00:05:58.013 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:58.013 "dma_device_type": 2 00:05:58.013 }, 00:05:58.013 { 00:05:58.013 "dma_device_id": "system", 00:05:58.013 "dma_device_type": 1 00:05:58.013 }, 00:05:58.013 { 00:05:58.013 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:58.013 "dma_device_type": 2 00:05:58.013 } 00:05:58.013 ], 00:05:58.013 "driver_specific": { 00:05:58.013 "raid": { 00:05:58.013 "uuid": "a0992610-9077-4a39-8e70-b24709c7442a", 00:05:58.013 "strip_size_kb": 64, 00:05:58.013 "state": "online", 00:05:58.013 "raid_level": "concat", 00:05:58.013 "superblock": true, 00:05:58.013 "num_base_bdevs": 2, 00:05:58.013 "num_base_bdevs_discovered": 2, 00:05:58.013 "num_base_bdevs_operational": 2, 00:05:58.013 "base_bdevs_list": [ 00:05:58.013 { 00:05:58.013 "name": "pt1", 00:05:58.013 "uuid": "00000000-0000-0000-0000-000000000001", 00:05:58.013 "is_configured": true, 00:05:58.013 "data_offset": 2048, 00:05:58.013 "data_size": 63488 00:05:58.013 }, 00:05:58.013 { 00:05:58.013 "name": "pt2", 00:05:58.013 "uuid": "00000000-0000-0000-0000-000000000002", 00:05:58.013 "is_configured": true, 00:05:58.013 "data_offset": 2048, 00:05:58.013 "data_size": 63488 00:05:58.013 } 00:05:58.013 ] 00:05:58.013 } 00:05:58.013 } 00:05:58.013 }' 00:05:58.013 19:46:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:05:58.013 19:46:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:05:58.013 pt2' 00:05:58.013 19:46:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:05:58.013 19:46:48 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:05:58.013 19:46:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:05:58.013 19:46:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:05:58.013 19:46:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:05:58.013 19:46:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:58.013 19:46:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:05:58.013 19:46:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:58.013 19:46:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:05:58.013 19:46:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:05:58.013 19:46:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:05:58.013 19:46:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:05:58.013 19:46:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:05:58.013 19:46:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:58.013 19:46:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:05:58.013 19:46:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:58.272 19:46:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:05:58.272 19:46:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:05:58.272 19:46:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:05:58.272 19:46:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:58.272 19:46:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:05:58.272 19:46:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:05:58.272 [2024-11-26 19:46:48.956394] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:05:58.272 19:46:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:58.272 19:46:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=a0992610-9077-4a39-8e70-b24709c7442a 00:05:58.272 19:46:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z a0992610-9077-4a39-8e70-b24709c7442a ']' 00:05:58.272 19:46:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:05:58.272 19:46:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:58.272 19:46:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:05:58.272 [2024-11-26 19:46:48.976102] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:05:58.272 [2024-11-26 19:46:48.976128] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:05:58.272 [2024-11-26 19:46:48.976210] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:05:58.272 [2024-11-26 19:46:48.976264] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:05:58.272 [2024-11-26 19:46:48.976276] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:05:58.272 19:46:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:58.272 19:46:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:05:58.272 19:46:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:05:58.272 19:46:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:58.272 19:46:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:05:58.272 19:46:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:58.272 19:46:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:05:58.272 19:46:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:05:58.272 19:46:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:05:58.272 19:46:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:05:58.272 19:46:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:58.272 19:46:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:05:58.272 19:46:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:58.272 19:46:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:05:58.272 19:46:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:05:58.272 19:46:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:58.272 19:46:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:05:58.272 19:46:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:58.272 19:46:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:05:58.272 19:46:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:58.272 19:46:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | 
select(.product_name == "passthru")] | any' 00:05:58.272 19:46:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:05:58.272 19:46:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:58.272 19:46:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:05:58.272 19:46:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:05:58.272 19:46:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:05:58.272 19:46:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:05:58.272 19:46:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:58.272 19:46:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:58.272 19:46:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:58.272 19:46:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:58.272 19:46:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:05:58.272 19:46:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:58.272 19:46:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:05:58.272 [2024-11-26 19:46:49.064151] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:05:58.272 [2024-11-26 19:46:49.066138] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:05:58.272 [2024-11-26 19:46:49.066206] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: 
*ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:05:58.272 [2024-11-26 19:46:49.066258] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:05:58.272 [2024-11-26 19:46:49.066273] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:05:58.272 [2024-11-26 19:46:49.066285] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:05:58.272 request: 00:05:58.272 { 00:05:58.272 "name": "raid_bdev1", 00:05:58.272 "raid_level": "concat", 00:05:58.272 "base_bdevs": [ 00:05:58.272 "malloc1", 00:05:58.272 "malloc2" 00:05:58.272 ], 00:05:58.272 "strip_size_kb": 64, 00:05:58.272 "superblock": false, 00:05:58.272 "method": "bdev_raid_create", 00:05:58.272 "req_id": 1 00:05:58.272 } 00:05:58.272 Got JSON-RPC error response 00:05:58.272 response: 00:05:58.272 { 00:05:58.272 "code": -17, 00:05:58.272 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:05:58.272 } 00:05:58.272 19:46:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:58.272 19:46:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:05:58.272 19:46:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:58.272 19:46:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:58.272 19:46:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:58.272 19:46:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:05:58.272 19:46:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:58.272 19:46:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:05:58.272 19:46:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:05:58.272 
19:46:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:58.272 19:46:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:05:58.272 19:46:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:05:58.272 19:46:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:05:58.272 19:46:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:58.272 19:46:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:05:58.272 [2024-11-26 19:46:49.108155] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:05:58.272 [2024-11-26 19:46:49.108214] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:58.272 [2024-11-26 19:46:49.108232] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:05:58.272 [2024-11-26 19:46:49.108244] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:58.272 [2024-11-26 19:46:49.110607] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:58.272 [2024-11-26 19:46:49.110637] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:05:58.272 [2024-11-26 19:46:49.110721] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:05:58.272 [2024-11-26 19:46:49.110775] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:05:58.272 pt1 00:05:58.272 19:46:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:58.272 19:46:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:05:58.272 19:46:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:05:58.272 19:46:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:05:58.272 19:46:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:05:58.272 19:46:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:05:58.272 19:46:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:05:58.272 19:46:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:05:58.272 19:46:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:05:58.272 19:46:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:05:58.272 19:46:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:05:58.272 19:46:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:05:58.272 19:46:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:05:58.272 19:46:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:58.272 19:46:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:05:58.272 19:46:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:58.273 19:46:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:05:58.273 "name": "raid_bdev1", 00:05:58.273 "uuid": "a0992610-9077-4a39-8e70-b24709c7442a", 00:05:58.273 "strip_size_kb": 64, 00:05:58.273 "state": "configuring", 00:05:58.273 "raid_level": "concat", 00:05:58.273 "superblock": true, 00:05:58.273 "num_base_bdevs": 2, 00:05:58.273 "num_base_bdevs_discovered": 1, 00:05:58.273 "num_base_bdevs_operational": 2, 00:05:58.273 "base_bdevs_list": [ 00:05:58.273 { 00:05:58.273 "name": "pt1", 00:05:58.273 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:05:58.273 "is_configured": true, 00:05:58.273 "data_offset": 2048, 00:05:58.273 "data_size": 63488 00:05:58.273 }, 00:05:58.273 { 00:05:58.273 "name": null, 00:05:58.273 "uuid": "00000000-0000-0000-0000-000000000002", 00:05:58.273 "is_configured": false, 00:05:58.273 "data_offset": 2048, 00:05:58.273 "data_size": 63488 00:05:58.273 } 00:05:58.273 ] 00:05:58.273 }' 00:05:58.273 19:46:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:05:58.273 19:46:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:05:58.530 19:46:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:05:58.530 19:46:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:05:58.530 19:46:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:05:58.530 19:46:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:05:58.530 19:46:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:58.530 19:46:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:05:58.530 [2024-11-26 19:46:49.424257] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:05:58.530 [2024-11-26 19:46:49.424327] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:58.530 [2024-11-26 19:46:49.424359] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:05:58.530 [2024-11-26 19:46:49.424371] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:58.530 [2024-11-26 19:46:49.424833] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:58.530 [2024-11-26 19:46:49.424856] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
pt2 00:05:58.530 [2024-11-26 19:46:49.424936] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:05:58.530 [2024-11-26 19:46:49.424962] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:05:58.530 [2024-11-26 19:46:49.425074] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:05:58.530 [2024-11-26 19:46:49.425086] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:05:58.530 [2024-11-26 19:46:49.425334] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:05:58.530 [2024-11-26 19:46:49.425475] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:05:58.530 [2024-11-26 19:46:49.425485] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:05:58.530 [2024-11-26 19:46:49.425612] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:05:58.530 pt2 00:05:58.530 19:46:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:58.530 19:46:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:05:58.530 19:46:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:05:58.530 19:46:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:05:58.530 19:46:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:05:58.530 19:46:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:05:58.530 19:46:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:05:58.530 19:46:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:05:58.530 19:46:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 
-- # local num_base_bdevs_operational=2 00:05:58.530 19:46:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:05:58.530 19:46:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:05:58.530 19:46:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:05:58.530 19:46:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:05:58.530 19:46:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:05:58.530 19:46:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:05:58.530 19:46:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:58.530 19:46:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:05:58.530 19:46:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:58.530 19:46:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:05:58.530 "name": "raid_bdev1", 00:05:58.530 "uuid": "a0992610-9077-4a39-8e70-b24709c7442a", 00:05:58.530 "strip_size_kb": 64, 00:05:58.530 "state": "online", 00:05:58.530 "raid_level": "concat", 00:05:58.530 "superblock": true, 00:05:58.530 "num_base_bdevs": 2, 00:05:58.530 "num_base_bdevs_discovered": 2, 00:05:58.530 "num_base_bdevs_operational": 2, 00:05:58.530 "base_bdevs_list": [ 00:05:58.530 { 00:05:58.530 "name": "pt1", 00:05:58.530 "uuid": "00000000-0000-0000-0000-000000000001", 00:05:58.530 "is_configured": true, 00:05:58.530 "data_offset": 2048, 00:05:58.530 "data_size": 63488 00:05:58.530 }, 00:05:58.531 { 00:05:58.531 "name": "pt2", 00:05:58.531 "uuid": "00000000-0000-0000-0000-000000000002", 00:05:58.531 "is_configured": true, 00:05:58.531 "data_offset": 2048, 00:05:58.531 "data_size": 63488 00:05:58.531 } 00:05:58.531 ] 00:05:58.531 }' 00:05:58.531 19:46:49 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:05:58.531 19:46:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:05:59.096 19:46:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:05:59.096 19:46:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:05:59.096 19:46:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:05:59.096 19:46:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:05:59.096 19:46:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:05:59.096 19:46:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:05:59.096 19:46:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:05:59.096 19:46:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:05:59.096 19:46:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:59.096 19:46:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:05:59.096 [2024-11-26 19:46:49.772627] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:05:59.096 19:46:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:59.096 19:46:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:05:59.096 "name": "raid_bdev1", 00:05:59.096 "aliases": [ 00:05:59.096 "a0992610-9077-4a39-8e70-b24709c7442a" 00:05:59.096 ], 00:05:59.096 "product_name": "Raid Volume", 00:05:59.096 "block_size": 512, 00:05:59.096 "num_blocks": 126976, 00:05:59.096 "uuid": "a0992610-9077-4a39-8e70-b24709c7442a", 00:05:59.096 "assigned_rate_limits": { 00:05:59.096 "rw_ios_per_sec": 0, 00:05:59.096 "rw_mbytes_per_sec": 0, 00:05:59.096 
"r_mbytes_per_sec": 0, 00:05:59.096 "w_mbytes_per_sec": 0 00:05:59.096 }, 00:05:59.096 "claimed": false, 00:05:59.096 "zoned": false, 00:05:59.096 "supported_io_types": { 00:05:59.096 "read": true, 00:05:59.096 "write": true, 00:05:59.096 "unmap": true, 00:05:59.096 "flush": true, 00:05:59.096 "reset": true, 00:05:59.096 "nvme_admin": false, 00:05:59.096 "nvme_io": false, 00:05:59.096 "nvme_io_md": false, 00:05:59.096 "write_zeroes": true, 00:05:59.096 "zcopy": false, 00:05:59.096 "get_zone_info": false, 00:05:59.096 "zone_management": false, 00:05:59.096 "zone_append": false, 00:05:59.096 "compare": false, 00:05:59.096 "compare_and_write": false, 00:05:59.096 "abort": false, 00:05:59.096 "seek_hole": false, 00:05:59.096 "seek_data": false, 00:05:59.096 "copy": false, 00:05:59.096 "nvme_iov_md": false 00:05:59.096 }, 00:05:59.096 "memory_domains": [ 00:05:59.096 { 00:05:59.096 "dma_device_id": "system", 00:05:59.097 "dma_device_type": 1 00:05:59.097 }, 00:05:59.097 { 00:05:59.097 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:59.097 "dma_device_type": 2 00:05:59.097 }, 00:05:59.097 { 00:05:59.097 "dma_device_id": "system", 00:05:59.097 "dma_device_type": 1 00:05:59.097 }, 00:05:59.097 { 00:05:59.097 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:59.097 "dma_device_type": 2 00:05:59.097 } 00:05:59.097 ], 00:05:59.097 "driver_specific": { 00:05:59.097 "raid": { 00:05:59.097 "uuid": "a0992610-9077-4a39-8e70-b24709c7442a", 00:05:59.097 "strip_size_kb": 64, 00:05:59.097 "state": "online", 00:05:59.097 "raid_level": "concat", 00:05:59.097 "superblock": true, 00:05:59.097 "num_base_bdevs": 2, 00:05:59.097 "num_base_bdevs_discovered": 2, 00:05:59.097 "num_base_bdevs_operational": 2, 00:05:59.097 "base_bdevs_list": [ 00:05:59.097 { 00:05:59.097 "name": "pt1", 00:05:59.097 "uuid": "00000000-0000-0000-0000-000000000001", 00:05:59.097 "is_configured": true, 00:05:59.097 "data_offset": 2048, 00:05:59.097 "data_size": 63488 00:05:59.097 }, 00:05:59.097 { 00:05:59.097 "name": 
"pt2", 00:05:59.097 "uuid": "00000000-0000-0000-0000-000000000002", 00:05:59.097 "is_configured": true, 00:05:59.097 "data_offset": 2048, 00:05:59.097 "data_size": 63488 00:05:59.097 } 00:05:59.097 ] 00:05:59.097 } 00:05:59.097 } 00:05:59.097 }' 00:05:59.097 19:46:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:05:59.097 19:46:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:05:59.097 pt2' 00:05:59.097 19:46:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:05:59.097 19:46:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:05:59.097 19:46:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:05:59.097 19:46:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:05:59.097 19:46:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:59.097 19:46:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:05:59.097 19:46:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:05:59.097 19:46:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:59.097 19:46:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:05:59.097 19:46:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:05:59.097 19:46:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:05:59.097 19:46:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:05:59.097 19:46:49 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:05:59.097 19:46:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:05:59.097 19:46:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:05:59.097 19:46:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:59.097 19:46:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:05:59.097 19:46:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:05:59.097 19:46:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:05:59.097 19:46:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:59.097 19:46:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:05:59.097 19:46:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:05:59.097 [2024-11-26 19:46:49.920638] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:05:59.097 19:46:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:59.097 19:46:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' a0992610-9077-4a39-8e70-b24709c7442a '!=' a0992610-9077-4a39-8e70-b24709c7442a ']' 00:05:59.097 19:46:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:05:59.097 19:46:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:05:59.097 19:46:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:05:59.097 19:46:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 60879 00:05:59.097 19:46:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 60879 ']' 00:05:59.097 19:46:49 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@958 -- # kill -0 60879 00:05:59.097 19:46:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:05:59.097 19:46:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:59.097 19:46:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60879 00:05:59.097 19:46:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:59.097 19:46:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:59.097 killing process with pid 60879 00:05:59.097 19:46:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60879' 00:05:59.097 19:46:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 60879 00:05:59.097 [2024-11-26 19:46:49.975539] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:05:59.097 19:46:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 60879 00:05:59.097 [2024-11-26 19:46:49.975636] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:05:59.097 [2024-11-26 19:46:49.975684] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:05:59.097 [2024-11-26 19:46:49.975694] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:05:59.355 [2024-11-26 19:46:50.084066] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:05:59.955 19:46:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:05:59.955 00:05:59.955 real 0m3.076s 00:05:59.955 user 0m4.229s 00:05:59.955 sys 0m0.571s 00:05:59.955 19:46:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:59.955 19:46:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # 
set +x 00:05:59.955 ************************************ 00:05:59.955 END TEST raid_superblock_test 00:05:59.955 ************************************ 00:05:59.955 19:46:50 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 2 read 00:05:59.955 19:46:50 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:05:59.955 19:46:50 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:59.955 19:46:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:05:59.955 ************************************ 00:05:59.955 START TEST raid_read_error_test 00:05:59.955 ************************************ 00:05:59.955 19:46:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 2 read 00:05:59.955 19:46:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:05:59.955 19:46:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:05:59.955 19:46:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:05:59.955 19:46:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:05:59.955 19:46:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:05:59.955 19:46:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:05:59.955 19:46:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:05:59.955 19:46:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:05:59.955 19:46:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:05:59.955 19:46:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:05:59.955 19:46:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:05:59.955 19:46:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- 
# base_bdevs=('BaseBdev1' 'BaseBdev2') 00:05:59.955 19:46:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:05:59.955 19:46:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:05:59.955 19:46:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:05:59.955 19:46:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:05:59.955 19:46:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:05:59.955 19:46:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:05:59.955 19:46:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:05:59.955 19:46:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:05:59.955 19:46:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:05:59.955 19:46:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:05:59.955 19:46:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.cSUeVjZor9 00:05:59.955 19:46:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61074 00:05:59.955 19:46:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61074 00:05:59.955 19:46:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 61074 ']' 00:05:59.955 19:46:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:59.955 19:46:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:59.955 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:59.955 19:46:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:59.955 19:46:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:59.955 19:46:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:05:59.955 19:46:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:05:59.955 [2024-11-26 19:46:50.819710] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 00:05:59.955 [2024-11-26 19:46:50.819829] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61074 ] 00:06:00.214 [2024-11-26 19:46:50.977433] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.214 [2024-11-26 19:46:51.079172] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.472 [2024-11-26 19:46:51.198938] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:00.472 [2024-11-26 19:46:51.198992] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:00.730 19:46:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:00.730 19:46:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:06:00.730 19:46:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:06:00.730 19:46:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:06:00.730 19:46:51 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:06:00.730 19:46:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:00.988 BaseBdev1_malloc 00:06:00.988 19:46:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:00.988 19:46:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:06:00.988 19:46:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:00.988 19:46:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:00.988 true 00:06:00.988 19:46:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:00.988 19:46:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:06:00.988 19:46:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:00.988 19:46:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:00.988 [2024-11-26 19:46:51.697095] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:06:00.988 [2024-11-26 19:46:51.697149] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:00.988 [2024-11-26 19:46:51.697168] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:06:00.988 [2024-11-26 19:46:51.697178] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:00.988 [2024-11-26 19:46:51.699045] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:00.988 [2024-11-26 19:46:51.699076] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:06:00.988 BaseBdev1 00:06:00.988 19:46:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:00.988 19:46:51 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:06:00.988 19:46:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:06:00.988 19:46:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:00.988 19:46:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:00.988 BaseBdev2_malloc 00:06:00.988 19:46:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:00.988 19:46:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:06:00.988 19:46:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:00.988 19:46:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:00.988 true 00:06:00.988 19:46:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:00.988 19:46:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:06:00.988 19:46:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:00.988 19:46:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:00.988 [2024-11-26 19:46:51.738453] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:06:00.988 [2024-11-26 19:46:51.738498] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:00.988 [2024-11-26 19:46:51.738512] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:06:00.988 [2024-11-26 19:46:51.738521] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:00.988 [2024-11-26 19:46:51.740369] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:00.988 
[2024-11-26 19:46:51.740395] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:06:00.988 BaseBdev2 00:06:00.988 19:46:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:00.988 19:46:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:06:00.988 19:46:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:00.988 19:46:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:00.988 [2024-11-26 19:46:51.746515] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:06:00.988 [2024-11-26 19:46:51.748173] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:06:00.988 [2024-11-26 19:46:51.748351] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:06:00.988 [2024-11-26 19:46:51.748364] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:06:00.988 [2024-11-26 19:46:51.748573] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:06:00.989 [2024-11-26 19:46:51.748709] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:06:00.989 [2024-11-26 19:46:51.748722] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:06:00.989 [2024-11-26 19:46:51.748837] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:00.989 19:46:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:00.989 19:46:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:06:00.989 19:46:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:06:00.989 19:46:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:06:00.989 19:46:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:06:00.989 19:46:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:00.989 19:46:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:00.989 19:46:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:00.989 19:46:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:00.989 19:46:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:00.989 19:46:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:00.989 19:46:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:00.989 19:46:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:00.989 19:46:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:00.989 19:46:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:06:00.989 19:46:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:00.989 19:46:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:00.989 "name": "raid_bdev1", 00:06:00.989 "uuid": "8e7befaf-e6f8-4df4-9f0b-2975b33625f2", 00:06:00.989 "strip_size_kb": 64, 00:06:00.989 "state": "online", 00:06:00.989 "raid_level": "concat", 00:06:00.989 "superblock": true, 00:06:00.989 "num_base_bdevs": 2, 00:06:00.989 "num_base_bdevs_discovered": 2, 00:06:00.989 "num_base_bdevs_operational": 2, 00:06:00.989 "base_bdevs_list": [ 00:06:00.989 { 00:06:00.989 "name": "BaseBdev1", 00:06:00.989 "uuid": 
"bf9eb57b-db38-50cb-891d-d5207bf5b537", 00:06:00.989 "is_configured": true, 00:06:00.989 "data_offset": 2048, 00:06:00.989 "data_size": 63488 00:06:00.989 }, 00:06:00.989 { 00:06:00.989 "name": "BaseBdev2", 00:06:00.989 "uuid": "6ffe3702-5f1c-5fe1-aef3-0f781e4e3a69", 00:06:00.989 "is_configured": true, 00:06:00.989 "data_offset": 2048, 00:06:00.989 "data_size": 63488 00:06:00.989 } 00:06:00.989 ] 00:06:00.989 }' 00:06:00.989 19:46:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:00.989 19:46:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:01.246 19:46:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:06:01.246 19:46:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:06:01.246 [2024-11-26 19:46:52.147471] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:06:02.181 19:46:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:06:02.181 19:46:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:02.181 19:46:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:02.181 19:46:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:02.181 19:46:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:06:02.181 19:46:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:06:02.181 19:46:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:06:02.181 19:46:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:06:02.181 19:46:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:06:02.181 19:46:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:06:02.181 19:46:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:06:02.181 19:46:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:02.181 19:46:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:02.181 19:46:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:02.181 19:46:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:02.181 19:46:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:02.181 19:46:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:02.181 19:46:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:02.181 19:46:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:02.181 19:46:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:02.181 19:46:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:06:02.181 19:46:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:02.181 19:46:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:02.181 "name": "raid_bdev1", 00:06:02.181 "uuid": "8e7befaf-e6f8-4df4-9f0b-2975b33625f2", 00:06:02.181 "strip_size_kb": 64, 00:06:02.181 "state": "online", 00:06:02.181 "raid_level": "concat", 00:06:02.181 "superblock": true, 00:06:02.181 "num_base_bdevs": 2, 00:06:02.181 "num_base_bdevs_discovered": 2, 00:06:02.181 "num_base_bdevs_operational": 2, 00:06:02.181 "base_bdevs_list": [ 00:06:02.181 { 00:06:02.181 "name": "BaseBdev1", 00:06:02.181 "uuid": 
"bf9eb57b-db38-50cb-891d-d5207bf5b537", 00:06:02.181 "is_configured": true, 00:06:02.181 "data_offset": 2048, 00:06:02.181 "data_size": 63488 00:06:02.181 }, 00:06:02.181 { 00:06:02.181 "name": "BaseBdev2", 00:06:02.181 "uuid": "6ffe3702-5f1c-5fe1-aef3-0f781e4e3a69", 00:06:02.181 "is_configured": true, 00:06:02.181 "data_offset": 2048, 00:06:02.181 "data_size": 63488 00:06:02.181 } 00:06:02.181 ] 00:06:02.181 }' 00:06:02.181 19:46:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:02.182 19:46:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:02.440 19:46:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:06:02.440 19:46:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:02.440 19:46:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:02.440 [2024-11-26 19:46:53.364412] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:06:02.440 [2024-11-26 19:46:53.364450] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:02.440 [2024-11-26 19:46:53.367036] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:02.440 [2024-11-26 19:46:53.367078] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:02.440 [2024-11-26 19:46:53.367110] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:02.440 [2024-11-26 19:46:53.367122] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:06:02.440 { 00:06:02.440 "results": [ 00:06:02.440 { 00:06:02.440 "job": "raid_bdev1", 00:06:02.440 "core_mask": "0x1", 00:06:02.440 "workload": "randrw", 00:06:02.440 "percentage": 50, 00:06:02.440 "status": "finished", 00:06:02.440 "queue_depth": 1, 00:06:02.440 "io_size": 
131072, 00:06:02.440 "runtime": 1.215341, 00:06:02.440 "iops": 16754.968358674643, 00:06:02.440 "mibps": 2094.3710448343304, 00:06:02.440 "io_failed": 1, 00:06:02.441 "io_timeout": 0, 00:06:02.441 "avg_latency_us": 82.2534947040781, 00:06:02.441 "min_latency_us": 25.796923076923076, 00:06:02.441 "max_latency_us": 1411.5446153846153 00:06:02.441 } 00:06:02.441 ], 00:06:02.441 "core_count": 1 00:06:02.441 } 00:06:02.441 19:46:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:02.441 19:46:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61074 00:06:02.441 19:46:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 61074 ']' 00:06:02.441 19:46:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 61074 00:06:02.441 19:46:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:06:02.441 19:46:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:02.698 19:46:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61074 00:06:02.698 killing process with pid 61074 00:06:02.698 19:46:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:02.698 19:46:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:02.698 19:46:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61074' 00:06:02.698 19:46:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 61074 00:06:02.698 [2024-11-26 19:46:53.398798] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:02.698 19:46:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 61074 00:06:02.698 [2024-11-26 19:46:53.469887] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:03.264 19:46:54 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.cSUeVjZor9 00:06:03.264 19:46:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:06:03.264 19:46:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:06:03.264 19:46:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.82 00:06:03.264 19:46:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:06:03.264 19:46:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:06:03.264 19:46:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:06:03.264 19:46:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.82 != \0\.\0\0 ]] 00:06:03.264 00:06:03.264 real 0m3.371s 00:06:03.264 user 0m4.018s 00:06:03.264 sys 0m0.413s 00:06:03.264 19:46:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:03.264 ************************************ 00:06:03.264 END TEST raid_read_error_test 00:06:03.264 ************************************ 00:06:03.264 19:46:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:03.264 19:46:54 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 2 write 00:06:03.264 19:46:54 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:06:03.264 19:46:54 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:03.264 19:46:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:03.264 ************************************ 00:06:03.264 START TEST raid_write_error_test 00:06:03.264 ************************************ 00:06:03.264 19:46:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 2 write 00:06:03.264 19:46:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 
00:06:03.264 19:46:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:06:03.264 19:46:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:06:03.264 19:46:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:06:03.264 19:46:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:06:03.264 19:46:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:06:03.264 19:46:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:06:03.264 19:46:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:06:03.264 19:46:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:06:03.264 19:46:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:06:03.264 19:46:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:06:03.264 19:46:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:06:03.264 19:46:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:06:03.264 19:46:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:06:03.264 19:46:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:06:03.264 19:46:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:06:03.264 19:46:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:06:03.264 19:46:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:06:03.264 19:46:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:06:03.264 19:46:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:06:03.264 
19:46:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:06:03.264 19:46:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:06:03.264 19:46:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.P2Vt6GiOjd 00:06:03.264 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:03.264 19:46:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61209 00:06:03.264 19:46:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61209 00:06:03.264 19:46:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 61209 ']' 00:06:03.264 19:46:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:03.264 19:46:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:03.264 19:46:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:03.264 19:46:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:03.264 19:46:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:03.264 19:46:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:06:03.522 [2024-11-26 19:46:54.235938] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 
00:06:03.522 [2024-11-26 19:46:54.236268] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61209 ] 00:06:03.522 [2024-11-26 19:46:54.395477] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.780 [2024-11-26 19:46:54.514511] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.780 [2024-11-26 19:46:54.661313] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:03.780 [2024-11-26 19:46:54.661369] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:04.346 19:46:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:04.346 19:46:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:06:04.346 19:46:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:06:04.346 19:46:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:06:04.346 19:46:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:04.346 19:46:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:04.346 BaseBdev1_malloc 00:06:04.346 19:46:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:04.346 19:46:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:06:04.346 19:46:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:04.346 19:46:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:04.346 true 00:06:04.346 19:46:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:06:04.346 19:46:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:06:04.346 19:46:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:04.346 19:46:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:04.346 [2024-11-26 19:46:55.120186] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:06:04.346 [2024-11-26 19:46:55.120251] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:04.346 [2024-11-26 19:46:55.120273] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:06:04.346 [2024-11-26 19:46:55.120285] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:04.346 [2024-11-26 19:46:55.122588] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:04.346 [2024-11-26 19:46:55.122629] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:06:04.346 BaseBdev1 00:06:04.346 19:46:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:04.346 19:46:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:06:04.346 19:46:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:06:04.346 19:46:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:04.346 19:46:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:04.346 BaseBdev2_malloc 00:06:04.346 19:46:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:04.346 19:46:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:06:04.346 19:46:55 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:04.346 19:46:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:04.346 true 00:06:04.346 19:46:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:04.346 19:46:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:06:04.346 19:46:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:04.346 19:46:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:04.346 [2024-11-26 19:46:55.170445] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:06:04.346 [2024-11-26 19:46:55.170520] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:04.346 [2024-11-26 19:46:55.170542] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:06:04.346 [2024-11-26 19:46:55.170553] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:04.346 [2024-11-26 19:46:55.172938] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:04.346 [2024-11-26 19:46:55.172983] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:06:04.346 BaseBdev2 00:06:04.346 19:46:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:04.346 19:46:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:06:04.346 19:46:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:04.346 19:46:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:04.346 [2024-11-26 19:46:55.178504] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:06:04.346 [2024-11-26 19:46:55.180553] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:06:04.346 [2024-11-26 19:46:55.180768] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:06:04.346 [2024-11-26 19:46:55.180783] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:06:04.346 [2024-11-26 19:46:55.181073] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:06:04.346 [2024-11-26 19:46:55.181240] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:06:04.346 [2024-11-26 19:46:55.181251] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:06:04.346 [2024-11-26 19:46:55.181438] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:04.346 19:46:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:04.346 19:46:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:06:04.346 19:46:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:06:04.346 19:46:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:06:04.346 19:46:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:06:04.346 19:46:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:04.346 19:46:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:04.346 19:46:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:04.346 19:46:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:04.346 19:46:55 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:04.346 19:46:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:04.346 19:46:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:04.346 19:46:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:06:04.346 19:46:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:04.346 19:46:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:04.346 19:46:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:04.346 19:46:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:04.346 "name": "raid_bdev1", 00:06:04.346 "uuid": "70d3488d-1bff-4085-b4c4-e04f063391ab", 00:06:04.346 "strip_size_kb": 64, 00:06:04.346 "state": "online", 00:06:04.346 "raid_level": "concat", 00:06:04.346 "superblock": true, 00:06:04.346 "num_base_bdevs": 2, 00:06:04.346 "num_base_bdevs_discovered": 2, 00:06:04.346 "num_base_bdevs_operational": 2, 00:06:04.346 "base_bdevs_list": [ 00:06:04.346 { 00:06:04.346 "name": "BaseBdev1", 00:06:04.346 "uuid": "c45b570c-207b-5fb1-9095-824275fe05cc", 00:06:04.346 "is_configured": true, 00:06:04.346 "data_offset": 2048, 00:06:04.346 "data_size": 63488 00:06:04.346 }, 00:06:04.346 { 00:06:04.346 "name": "BaseBdev2", 00:06:04.346 "uuid": "b4edbf28-da82-59bd-8171-f6cf17cee99a", 00:06:04.346 "is_configured": true, 00:06:04.346 "data_offset": 2048, 00:06:04.346 "data_size": 63488 00:06:04.346 } 00:06:04.346 ] 00:06:04.346 }' 00:06:04.346 19:46:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:04.346 19:46:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:04.604 19:46:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- 
# /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:06:04.604 19:46:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:06:04.861 [2024-11-26 19:46:55.575611] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:06:05.796 19:46:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:06:05.796 19:46:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:05.796 19:46:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:05.796 19:46:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:05.796 19:46:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:06:05.796 19:46:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:06:05.796 19:46:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:06:05.796 19:46:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:06:05.796 19:46:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:06:05.796 19:46:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:06:05.796 19:46:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:06:05.796 19:46:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:05.796 19:46:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:05.796 19:46:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:05.796 19:46:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:06:05.796 19:46:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:05.796 19:46:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:05.796 19:46:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:05.796 19:46:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:05.796 19:46:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:06:05.796 19:46:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:05.796 19:46:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:05.796 19:46:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:05.796 "name": "raid_bdev1", 00:06:05.796 "uuid": "70d3488d-1bff-4085-b4c4-e04f063391ab", 00:06:05.796 "strip_size_kb": 64, 00:06:05.796 "state": "online", 00:06:05.796 "raid_level": "concat", 00:06:05.796 "superblock": true, 00:06:05.796 "num_base_bdevs": 2, 00:06:05.796 "num_base_bdevs_discovered": 2, 00:06:05.796 "num_base_bdevs_operational": 2, 00:06:05.796 "base_bdevs_list": [ 00:06:05.796 { 00:06:05.796 "name": "BaseBdev1", 00:06:05.796 "uuid": "c45b570c-207b-5fb1-9095-824275fe05cc", 00:06:05.796 "is_configured": true, 00:06:05.796 "data_offset": 2048, 00:06:05.796 "data_size": 63488 00:06:05.796 }, 00:06:05.796 { 00:06:05.796 "name": "BaseBdev2", 00:06:05.796 "uuid": "b4edbf28-da82-59bd-8171-f6cf17cee99a", 00:06:05.796 "is_configured": true, 00:06:05.796 "data_offset": 2048, 00:06:05.796 "data_size": 63488 00:06:05.796 } 00:06:05.796 ] 00:06:05.796 }' 00:06:05.796 19:46:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:05.796 19:46:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:06.054 19:46:56 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:06:06.054 19:46:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:06.054 19:46:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:06.054 [2024-11-26 19:46:56.825769] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:06:06.054 [2024-11-26 19:46:56.825962] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:06.054 [2024-11-26 19:46:56.829114] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:06.054 [2024-11-26 19:46:56.829160] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:06.054 [2024-11-26 19:46:56.829195] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:06.054 [2024-11-26 19:46:56.829206] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:06:06.054 { 00:06:06.054 "results": [ 00:06:06.054 { 00:06:06.054 "job": "raid_bdev1", 00:06:06.054 "core_mask": "0x1", 00:06:06.054 "workload": "randrw", 00:06:06.054 "percentage": 50, 00:06:06.054 "status": "finished", 00:06:06.054 "queue_depth": 1, 00:06:06.054 "io_size": 131072, 00:06:06.054 "runtime": 1.248375, 00:06:06.054 "iops": 14004.60598778412, 00:06:06.054 "mibps": 1750.575748473015, 00:06:06.054 "io_failed": 1, 00:06:06.054 "io_timeout": 0, 00:06:06.054 "avg_latency_us": 98.08681695792198, 00:06:06.054 "min_latency_us": 33.28, 00:06:06.054 "max_latency_us": 1751.8276923076924 00:06:06.054 } 00:06:06.054 ], 00:06:06.054 "core_count": 1 00:06:06.054 } 00:06:06.054 19:46:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:06.054 19:46:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61209 00:06:06.054 19:46:56 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@954 -- # '[' -z 61209 ']' 00:06:06.054 19:46:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 61209 00:06:06.054 19:46:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:06:06.054 19:46:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:06.054 19:46:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61209 00:06:06.054 19:46:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:06.054 19:46:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:06.054 19:46:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61209' 00:06:06.054 killing process with pid 61209 00:06:06.054 19:46:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 61209 00:06:06.054 [2024-11-26 19:46:56.859251] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:06.054 19:46:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 61209 00:06:06.054 [2024-11-26 19:46:56.950658] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:06.987 19:46:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.P2Vt6GiOjd 00:06:06.987 19:46:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:06:06.987 19:46:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:06:06.987 19:46:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.80 00:06:06.987 19:46:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:06:06.987 19:46:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:06:06.987 19:46:57 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@200 -- # return 1
00:06:06.987 ************************************
00:06:06.987 END TEST raid_write_error_test
00:06:06.987 ************************************
00:06:06.987 19:46:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.80 != \0\.\0\0 ]]
00:06:06.987
00:06:06.987 real 0m3.606s
00:06:06.987 user 0m4.268s
00:06:06.987 sys 0m0.415s
00:06:06.987 19:46:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:06.987 19:46:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:06:06.987 19:46:57 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1
00:06:06.987 19:46:57 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false
00:06:06.987 19:46:57 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:06:06.987 19:46:57 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:06.987 19:46:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:06:06.987 ************************************
00:06:06.987 START TEST raid_state_function_test
00:06:06.987 ************************************
00:06:06.987 19:46:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 false
00:06:06.987 19:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1
00:06:06.987 19:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2
00:06:06.987 19:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false
00:06:06.987 19:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev
00:06:06.987 19:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 ))
00:06:06.987 19:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:06:06.987 19:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1
00:06:06.987 19:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:06:06.987 19:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:06:06.987 19:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2
00:06:06.987 19:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:06:06.987 19:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:06:06.987 Process raid pid: 61341
00:06:06.987 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:06.987 19:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2')
00:06:06.987 19:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs
00:06:06.987 19:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid
00:06:06.987 19:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size
00:06:06.987 19:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg
00:06:06.987 19:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg
00:06:06.987 19:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']'
00:06:06.987 19:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0
00:06:06.987 19:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']'
00:06:06.987 19:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg=
00:06:06.987 19:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=61341
00:06:06.987 19:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61341'
00:06:06.987 19:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 61341
00:06:06.987 19:46:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 61341 ']'
00:06:06.987 19:46:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:06.987 19:46:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:06.987 19:46:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:06.987 19:46:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:06.987 19:46:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:06:06.987 19:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:06:06.987 [2024-11-26 19:46:57.875363] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization...
00:06:06.988 [2024-11-26 19:46:57.875678] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:06:07.245 [2024-11-26 19:46:58.041815] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:07.245 [2024-11-26 19:46:58.160539] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:07.504 [2024-11-26 19:46:58.309544] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:06:07.504 [2024-11-26 19:46:58.309607] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:06:08.069 19:46:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:08.069 19:46:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0
00:06:08.069 19:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
00:06:08.069 19:46:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:08.070 19:46:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:06:08.070 [2024-11-26 19:46:58.726243] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:06:08.070 [2024-11-26 19:46:58.726303] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:06:08.070 [2024-11-26 19:46:58.726314] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:06:08.070 [2024-11-26 19:46:58.726326] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:06:08.070 19:46:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:08.070 19:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2
00:06:08.070 19:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:06:08.070 19:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:06:08.070 19:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:06:08.070 19:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:06:08.070 19:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:06:08.070 19:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:06:08.070 19:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:06:08.070 19:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:06:08.070 19:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:06:08.070 19:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:06:08.070 19:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:06:08.070 19:46:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:08.070 19:46:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:06:08.070 19:46:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:08.070 19:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:06:08.070 "name": "Existed_Raid",
00:06:08.070 "uuid": "00000000-0000-0000-0000-000000000000",
00:06:08.070 "strip_size_kb": 0,
00:06:08.070 "state": "configuring",
00:06:08.070 "raid_level": "raid1",
00:06:08.070 "superblock": false,
00:06:08.070 "num_base_bdevs": 2,
00:06:08.070 "num_base_bdevs_discovered": 0,
00:06:08.070 "num_base_bdevs_operational": 2,
00:06:08.070 "base_bdevs_list": [
00:06:08.070 {
00:06:08.070 "name": "BaseBdev1",
00:06:08.070 "uuid": "00000000-0000-0000-0000-000000000000",
00:06:08.070 "is_configured": false,
00:06:08.070 "data_offset": 0,
00:06:08.070 "data_size": 0
00:06:08.070 },
00:06:08.070 {
00:06:08.070 "name": "BaseBdev2",
00:06:08.070 "uuid": "00000000-0000-0000-0000-000000000000",
00:06:08.070 "is_configured": false,
00:06:08.070 "data_offset": 0,
00:06:08.070 "data_size": 0
00:06:08.070 }
00:06:08.070 ]
00:06:08.070 }'
00:06:08.070 19:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:06:08.070 19:46:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:06:08.328 19:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:06:08.328 19:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:08.328 19:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:06:08.328 [2024-11-26 19:46:59.050251] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:06:08.328 [2024-11-26 19:46:59.050288] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring
00:06:08.328 19:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:08.328 19:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
00:06:08.328 19:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:08.328 19:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:06:08.328 [2024-11-26 19:46:59.058246] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:06:08.328 [2024-11-26 19:46:59.058290] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:06:08.328 [2024-11-26 19:46:59.058299] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:06:08.328 [2024-11-26 19:46:59.058312] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:06:08.328 19:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:08.328 19:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:06:08.328 19:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:08.328 19:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:06:08.328 [2024-11-26 19:46:59.093055] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:06:08.328 BaseBdev1
00:06:08.328 19:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:08.328 19:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1
00:06:08.328 19:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1
00:06:08.328 19:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:06:08.328 19:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:06:08.328 19:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:06:08.328 19:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:06:08.328 19:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:06:08.328 19:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:08.328 19:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:06:08.328 19:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:08.328 19:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:06:08.328 19:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:08.328 19:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:06:08.328 [
00:06:08.328 {
00:06:08.328 "name": "BaseBdev1",
00:06:08.328 "aliases": [
00:06:08.328 "89774267-3646-416c-8f91-0b112e105659"
00:06:08.328 ],
00:06:08.328 "product_name": "Malloc disk",
00:06:08.328 "block_size": 512,
00:06:08.328 "num_blocks": 65536,
00:06:08.328 "uuid": "89774267-3646-416c-8f91-0b112e105659",
00:06:08.328 "assigned_rate_limits": {
00:06:08.328 "rw_ios_per_sec": 0,
00:06:08.328 "rw_mbytes_per_sec": 0,
00:06:08.328 "r_mbytes_per_sec": 0,
00:06:08.328 "w_mbytes_per_sec": 0
00:06:08.328 },
00:06:08.328 "claimed": true,
00:06:08.328 "claim_type": "exclusive_write",
00:06:08.328 "zoned": false,
00:06:08.328 "supported_io_types": {
00:06:08.328 "read": true,
00:06:08.328 "write": true,
00:06:08.328 "unmap": true,
00:06:08.328 "flush": true,
00:06:08.328 "reset": true,
00:06:08.328 "nvme_admin": false,
00:06:08.328 "nvme_io": false,
00:06:08.328 "nvme_io_md": false,
00:06:08.328 "write_zeroes": true,
00:06:08.328 "zcopy": true,
00:06:08.328 "get_zone_info": false,
00:06:08.328 "zone_management": false,
00:06:08.328 "zone_append": false,
00:06:08.328 "compare": false,
00:06:08.328 "compare_and_write": false,
00:06:08.328 "abort": true,
00:06:08.328 "seek_hole": false,
00:06:08.328 "seek_data": false,
00:06:08.328 "copy": true,
00:06:08.328 "nvme_iov_md": false
00:06:08.328 },
00:06:08.328 "memory_domains": [
00:06:08.328 {
00:06:08.328 "dma_device_id": "system",
00:06:08.328 "dma_device_type": 1
00:06:08.328 },
00:06:08.328 {
00:06:08.328 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:06:08.328 "dma_device_type": 2
00:06:08.328 }
00:06:08.328 ],
00:06:08.328 "driver_specific": {}
00:06:08.328 }
00:06:08.328 ]
00:06:08.328 19:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:08.328 19:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:06:08.328 19:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2
00:06:08.328 19:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:06:08.328 19:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:06:08.328 19:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:06:08.328 19:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:06:08.328 19:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:06:08.328 19:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:06:08.328 19:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:06:08.328 19:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:06:08.328 19:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:06:08.328 19:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:06:08.328 19:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:08.328 19:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:06:08.328 19:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:06:08.328 19:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:08.328 19:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:06:08.328 "name": "Existed_Raid",
00:06:08.328 "uuid": "00000000-0000-0000-0000-000000000000",
00:06:08.328 "strip_size_kb": 0,
00:06:08.328 "state": "configuring",
00:06:08.328 "raid_level": "raid1",
00:06:08.328 "superblock": false,
00:06:08.328 "num_base_bdevs": 2,
00:06:08.328 "num_base_bdevs_discovered": 1,
00:06:08.328 "num_base_bdevs_operational": 2,
00:06:08.328 "base_bdevs_list": [
00:06:08.328 {
00:06:08.328 "name": "BaseBdev1",
00:06:08.328 "uuid": "89774267-3646-416c-8f91-0b112e105659",
00:06:08.328 "is_configured": true,
00:06:08.328 "data_offset": 0,
00:06:08.328 "data_size": 65536
00:06:08.328 },
00:06:08.328 {
00:06:08.328 "name": "BaseBdev2",
00:06:08.328 "uuid": "00000000-0000-0000-0000-000000000000",
00:06:08.328 "is_configured": false,
00:06:08.328 "data_offset": 0,
00:06:08.328 "data_size": 0
00:06:08.328 }
00:06:08.328 ]
00:06:08.328 }'
00:06:08.329 19:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:06:08.329 19:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:06:08.586 19:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:06:08.586 19:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:08.586 19:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:06:08.586 [2024-11-26 19:46:59.437164] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:06:08.586 [2024-11-26 19:46:59.437217] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring
00:06:08.586 19:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:08.586 19:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
00:06:08.586 19:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:08.586 19:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:06:08.586 [2024-11-26 19:46:59.445202] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:06:08.586 [2024-11-26 19:46:59.446909] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:06:08.586 [2024-11-26 19:46:59.446957] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:06:08.586 19:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:08.586 19:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 ))
00:06:08.586 19:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:06:08.586 19:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2
00:06:08.586 19:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:06:08.586 19:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:06:08.586 19:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:06:08.586 19:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:06:08.586 19:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:06:08.586 19:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:06:08.586 19:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:06:08.586 19:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:06:08.586 19:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:06:08.586 19:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:06:08.586 19:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:06:08.586 19:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:08.586 19:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:06:08.587 19:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:08.587 19:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:06:08.587 "name": "Existed_Raid",
00:06:08.587 "uuid": "00000000-0000-0000-0000-000000000000",
00:06:08.587 "strip_size_kb": 0,
00:06:08.587 "state": "configuring",
00:06:08.587 "raid_level": "raid1",
00:06:08.587 "superblock": false,
00:06:08.587 "num_base_bdevs": 2,
00:06:08.587 "num_base_bdevs_discovered": 1,
00:06:08.587 "num_base_bdevs_operational": 2,
00:06:08.587 "base_bdevs_list": [
00:06:08.587 {
00:06:08.587 "name": "BaseBdev1",
00:06:08.587 "uuid": "89774267-3646-416c-8f91-0b112e105659",
00:06:08.587 "is_configured": true,
00:06:08.587 "data_offset": 0,
00:06:08.587 "data_size": 65536
00:06:08.587 },
00:06:08.587 {
00:06:08.587 "name": "BaseBdev2",
00:06:08.587 "uuid": "00000000-0000-0000-0000-000000000000",
00:06:08.587 "is_configured": false,
00:06:08.587 "data_offset": 0,
00:06:08.587 "data_size": 0
00:06:08.587 }
00:06:08.587 ]
00:06:08.587 }'
00:06:08.587 19:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:06:08.587 19:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:06:08.847 19:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:06:08.847 19:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:08.847 19:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:06:09.106 [2024-11-26 19:46:59.793722] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:06:09.106 [2024-11-26 19:46:59.793780] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:06:09.106 [2024-11-26 19:46:59.793787] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512
00:06:09.106 [2024-11-26 19:46:59.794016] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:06:09.106 [2024-11-26 19:46:59.794155] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
00:06:09.106 [2024-11-26 19:46:59.794164] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80
00:06:09.106 [2024-11-26 19:46:59.794422] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:06:09.106 BaseBdev2
00:06:09.106 19:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:09.106 19:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2
00:06:09.106 19:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2
00:06:09.106 19:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:06:09.106 19:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:06:09.106 19:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:06:09.106 19:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:06:09.106 19:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:06:09.106 19:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:09.106 19:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:06:09.106 19:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:09.106 19:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:06:09.106 19:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:09.106 19:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:06:09.106 [
00:06:09.106 {
00:06:09.106 "name": "BaseBdev2",
00:06:09.106 "aliases": [
00:06:09.106 "6e27abad-7a3d-428d-9b06-c1d92531b886"
00:06:09.106 ],
00:06:09.106 "product_name": "Malloc disk",
00:06:09.106 "block_size": 512,
00:06:09.106 "num_blocks": 65536,
00:06:09.106 "uuid": "6e27abad-7a3d-428d-9b06-c1d92531b886",
00:06:09.106 "assigned_rate_limits": {
00:06:09.106 "rw_ios_per_sec": 0,
00:06:09.106 "rw_mbytes_per_sec": 0,
00:06:09.106 "r_mbytes_per_sec": 0,
00:06:09.106 "w_mbytes_per_sec": 0
00:06:09.106 },
00:06:09.106 "claimed": true,
00:06:09.106 "claim_type": "exclusive_write",
00:06:09.106 "zoned": false,
00:06:09.106 "supported_io_types": {
00:06:09.106 "read": true,
00:06:09.106 "write": true,
00:06:09.106 "unmap": true,
00:06:09.106 "flush": true,
00:06:09.106 "reset": true,
00:06:09.106 "nvme_admin": false,
00:06:09.106 "nvme_io": false,
00:06:09.106 "nvme_io_md": false,
00:06:09.106 "write_zeroes": true,
00:06:09.106 "zcopy": true,
00:06:09.106 "get_zone_info": false,
00:06:09.106 "zone_management": false,
00:06:09.106 "zone_append": false,
00:06:09.106 "compare": false,
00:06:09.106 "compare_and_write": false,
00:06:09.106 "abort": true,
00:06:09.106 "seek_hole": false,
00:06:09.106 "seek_data": false,
00:06:09.106 "copy": true,
00:06:09.106 "nvme_iov_md": false
00:06:09.106 },
00:06:09.106 "memory_domains": [
00:06:09.106 {
00:06:09.106 "dma_device_id": "system",
00:06:09.106 "dma_device_type": 1
00:06:09.106 },
00:06:09.106 {
00:06:09.106 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:06:09.106 "dma_device_type": 2
00:06:09.106 }
00:06:09.106 ],
00:06:09.106 "driver_specific": {}
00:06:09.106 }
00:06:09.106 ]
00:06:09.106 19:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:09.106 19:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:06:09.106 19:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:06:09.106 19:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:06:09.106 19:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2
00:06:09.106 19:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:06:09.106 19:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:06:09.106 19:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:06:09.106 19:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:06:09.106 19:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:06:09.106 19:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:06:09.106 19:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:06:09.106 19:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:06:09.106 19:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:06:09.106 19:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:06:09.106 19:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:06:09.106 19:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:09.106 19:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:06:09.106 19:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:09.106 19:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:06:09.106 "name": "Existed_Raid",
00:06:09.106 "uuid": "c4e90c67-59b1-4abd-8132-74a250c4a539",
00:06:09.106 "strip_size_kb": 0,
00:06:09.106 "state": "online",
00:06:09.106 "raid_level": "raid1",
00:06:09.106 "superblock": false,
00:06:09.106 "num_base_bdevs": 2,
00:06:09.106 "num_base_bdevs_discovered": 2,
00:06:09.106 "num_base_bdevs_operational": 2,
00:06:09.106 "base_bdevs_list": [
00:06:09.106 {
00:06:09.106 "name": "BaseBdev1",
00:06:09.106 "uuid": "89774267-3646-416c-8f91-0b112e105659",
00:06:09.106 "is_configured": true,
00:06:09.106 "data_offset": 0,
00:06:09.106 "data_size": 65536
00:06:09.106 },
00:06:09.106 {
00:06:09.106 "name": "BaseBdev2",
00:06:09.106 "uuid": "6e27abad-7a3d-428d-9b06-c1d92531b886",
00:06:09.106 "is_configured": true,
00:06:09.106 "data_offset": 0,
00:06:09.106 "data_size": 65536
00:06:09.106 }
00:06:09.106 ]
00:06:09.106 }'
00:06:09.106 19:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:06:09.106 19:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:06:09.365 19:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid
00:06:09.365 19:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid
00:06:09.365 19:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:06:09.365 19:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:06:09.365 19:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name
00:06:09.365 19:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:06:09.365 19:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid
00:06:09.365 19:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:06:09.365 19:47:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:09.365 19:47:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:06:09.365 [2024-11-26 19:47:00.130085] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:06:09.365 19:47:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:09.365 19:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:06:09.365 "name": "Existed_Raid",
00:06:09.365 "aliases": [
00:06:09.365 "c4e90c67-59b1-4abd-8132-74a250c4a539"
00:06:09.365 ],
00:06:09.365 "product_name": "Raid Volume",
00:06:09.365 "block_size": 512,
00:06:09.365 "num_blocks": 65536,
00:06:09.365 "uuid": "c4e90c67-59b1-4abd-8132-74a250c4a539",
00:06:09.365 "assigned_rate_limits": {
00:06:09.365 "rw_ios_per_sec": 0,
00:06:09.365 "rw_mbytes_per_sec": 0,
00:06:09.365 "r_mbytes_per_sec": 0,
00:06:09.365 "w_mbytes_per_sec": 0
00:06:09.365 },
00:06:09.365 "claimed": false,
00:06:09.365 "zoned": false,
00:06:09.365 "supported_io_types": {
00:06:09.365 "read": true,
00:06:09.365 "write": true,
00:06:09.365 "unmap": false,
00:06:09.365 "flush": false,
00:06:09.365 "reset": true,
00:06:09.365 "nvme_admin": false,
00:06:09.365 "nvme_io": false,
00:06:09.365 "nvme_io_md": false,
00:06:09.365 "write_zeroes": true,
00:06:09.365 "zcopy": false,
00:06:09.365 "get_zone_info": false,
00:06:09.365 "zone_management": false,
00:06:09.365 "zone_append": false,
00:06:09.365 "compare": false,
00:06:09.365 "compare_and_write": false,
00:06:09.365 "abort": false,
00:06:09.365 "seek_hole": false,
00:06:09.365 "seek_data": false,
00:06:09.365 "copy": false,
00:06:09.365 "nvme_iov_md": false
00:06:09.365 },
00:06:09.365 "memory_domains": [
00:06:09.365 {
00:06:09.365 "dma_device_id": "system",
00:06:09.365 "dma_device_type": 1
00:06:09.365 },
00:06:09.365 {
00:06:09.365 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:06:09.365 "dma_device_type": 2
00:06:09.365 },
00:06:09.365 {
00:06:09.365 "dma_device_id": "system",
00:06:09.365 "dma_device_type": 1
00:06:09.365 },
00:06:09.365 {
00:06:09.365 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:06:09.365 "dma_device_type": 2
00:06:09.365 }
00:06:09.365 ],
00:06:09.365 "driver_specific": {
00:06:09.365 "raid": {
00:06:09.365 "uuid": "c4e90c67-59b1-4abd-8132-74a250c4a539",
00:06:09.365 "strip_size_kb": 0,
00:06:09.365 "state": "online",
00:06:09.365 "raid_level": "raid1",
00:06:09.365 "superblock": false,
00:06:09.365 "num_base_bdevs": 2,
00:06:09.365 "num_base_bdevs_discovered": 2,
00:06:09.365 "num_base_bdevs_operational": 2,
00:06:09.365 "base_bdevs_list": [
00:06:09.365 {
00:06:09.365 "name": "BaseBdev1",
00:06:09.365 "uuid": "89774267-3646-416c-8f91-0b112e105659",
00:06:09.365 "is_configured": true,
00:06:09.365 "data_offset": 0,
00:06:09.366 "data_size": 65536
00:06:09.366 },
00:06:09.366 {
00:06:09.366 "name": "BaseBdev2",
00:06:09.366 "uuid": "6e27abad-7a3d-428d-9b06-c1d92531b886",
00:06:09.366 "is_configured": true,
00:06:09.366 "data_offset": 0,
00:06:09.366 "data_size": 65536
00:06:09.366 }
00:06:09.366 ]
00:06:09.366 }
00:06:09.366 }
00:06:09.366 }'
00:06:09.366 19:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:06:09.366 19:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1
00:06:09.366 BaseBdev2'
00:06:09.366 19:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:06:09.366 19:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:06:09.366 19:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:06:09.366 19:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:06:09.366 19:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1
00:06:09.366 19:47:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:09.366 19:47:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:06:09.366 19:47:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:09.366 19:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:06:09.366 19:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:06:09.366 19:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:06:09.366 19:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2
00:06:09.366 19:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:06:09.366 19:47:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:09.366 19:47:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:06:09.366 19:47:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:09.624 19:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:06:09.624 19:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:06:09.624 19:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:06:09.624 19:47:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:09.624 19:47:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:06:09.624 [2024-11-26 19:47:00.322000] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:06:09.624 19:47:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:09.624 19:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state
00:06:09.624 19:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1
00:06:09.624 19:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:06:09.624 19:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0
00:06:09.624 19:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online
00:06:09.624 19:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1
00:06:09.624 19:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- #
local raid_bdev_name=Existed_Raid 00:06:09.624 19:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:06:09.624 19:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:06:09.624 19:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:06:09.624 19:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:06:09.624 19:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:09.624 19:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:09.624 19:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:09.624 19:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:09.624 19:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:09.624 19:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:09.624 19:47:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:09.624 19:47:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:09.624 19:47:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:09.624 19:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:09.624 "name": "Existed_Raid", 00:06:09.624 "uuid": "c4e90c67-59b1-4abd-8132-74a250c4a539", 00:06:09.624 "strip_size_kb": 0, 00:06:09.624 "state": "online", 00:06:09.624 "raid_level": "raid1", 00:06:09.624 "superblock": false, 00:06:09.624 "num_base_bdevs": 2, 00:06:09.624 "num_base_bdevs_discovered": 1, 00:06:09.624 "num_base_bdevs_operational": 1, 00:06:09.624 "base_bdevs_list": [ 00:06:09.624 
{ 00:06:09.624 "name": null, 00:06:09.624 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:09.624 "is_configured": false, 00:06:09.624 "data_offset": 0, 00:06:09.624 "data_size": 65536 00:06:09.624 }, 00:06:09.624 { 00:06:09.624 "name": "BaseBdev2", 00:06:09.624 "uuid": "6e27abad-7a3d-428d-9b06-c1d92531b886", 00:06:09.624 "is_configured": true, 00:06:09.624 "data_offset": 0, 00:06:09.624 "data_size": 65536 00:06:09.624 } 00:06:09.624 ] 00:06:09.624 }' 00:06:09.624 19:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:09.624 19:47:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:09.883 19:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:06:09.883 19:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:06:09.883 19:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:06:09.883 19:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:09.883 19:47:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:09.883 19:47:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:09.883 19:47:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:09.883 19:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:06:09.883 19:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:06:09.883 19:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:06:09.883 19:47:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:09.883 19:47:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
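Aside: `verify_raid_bdev_state` (the `bdev_raid.sh@113` lines above) locates one raid bdev by name with a `jq` select over the `bdev_raid_get_bdevs all` array. A minimal sketch, using a trimmed stand-in for that RPC output:

```shell
# Trimmed stand-in for the `rpc_cmd bdev_raid_get_bdevs all` array above.
bdevs='[{"name":"SomeOtherRaid","state":"configuring"},
        {"name":"Existed_Raid","state":"online"}]'

# Same shape as the bdev_raid.sh@113 filter: select the entry by name,
# then read the field the test asserts on (here, the raid state).
echo "$bdevs" | jq -r '.[] | select(.name == "Existed_Raid").state'
# prints: online
```

In the script the whole selected object is captured into `raid_bdev_info` and individual fields (`state`, `raid_level`, `num_base_bdevs_discovered`, ...) are extracted from it afterwards.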
00:06:09.883 [2024-11-26 19:47:00.763254] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:06:09.883 [2024-11-26 19:47:00.763362] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:09.883 [2024-11-26 19:47:00.814108] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:09.883 [2024-11-26 19:47:00.814317] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:09.883 [2024-11-26 19:47:00.814416] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:06:09.883 19:47:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:09.883 19:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:06:09.883 19:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:06:09.883 19:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:10.142 19:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:06:10.142 19:47:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:10.142 19:47:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:10.142 19:47:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:10.142 19:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:06:10.142 19:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:06:10.142 19:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:06:10.142 19:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 61341 00:06:10.142 19:47:00 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 61341 ']' 00:06:10.142 19:47:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 61341 00:06:10.142 19:47:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:06:10.142 19:47:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:10.142 19:47:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61341 00:06:10.142 killing process with pid 61341 00:06:10.142 19:47:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:10.142 19:47:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:10.142 19:47:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61341' 00:06:10.142 19:47:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 61341 00:06:10.142 [2024-11-26 19:47:00.863960] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:10.142 19:47:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 61341 00:06:10.142 [2024-11-26 19:47:00.872915] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:10.708 ************************************ 00:06:10.708 END TEST raid_state_function_test 00:06:10.708 ************************************ 00:06:10.708 19:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:06:10.708 00:06:10.708 real 0m3.693s 00:06:10.708 user 0m5.370s 00:06:10.708 sys 0m0.618s 00:06:10.708 19:47:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:10.708 19:47:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:10.708 19:47:01 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test 
raid_state_function_test_sb raid_state_function_test raid1 2 true 00:06:10.708 19:47:01 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:06:10.708 19:47:01 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:10.708 19:47:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:10.708 ************************************ 00:06:10.708 START TEST raid_state_function_test_sb 00:06:10.708 ************************************ 00:06:10.708 19:47:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:06:10.708 19:47:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:06:10.709 19:47:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:06:10.709 19:47:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:06:10.709 19:47:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:06:10.709 19:47:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:06:10.709 19:47:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:06:10.709 19:47:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:06:10.709 19:47:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:06:10.709 19:47:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:06:10.709 19:47:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:06:10.709 19:47:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:06:10.709 19:47:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:06:10.709 Process raid pid: 61578 00:06:10.709 Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:10.709 19:47:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:06:10.709 19:47:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:06:10.709 19:47:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:06:10.709 19:47:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:06:10.709 19:47:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:06:10.709 19:47:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:06:10.709 19:47:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:06:10.709 19:47:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:06:10.709 19:47:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:06:10.709 19:47:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:06:10.709 19:47:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=61578 00:06:10.709 19:47:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61578' 00:06:10.709 19:47:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 61578 00:06:10.709 19:47:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 61578 ']' 00:06:10.709 19:47:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:10.709 19:47:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:10.709 19:47:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # 
echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:10.709 19:47:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:10.709 19:47:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:10.709 19:47:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:10.709 [2024-11-26 19:47:01.642129] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 00:06:10.709 [2024-11-26 19:47:01.642317] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:10.968 [2024-11-26 19:47:01.818815] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.305 [2024-11-26 19:47:01.926483] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.305 [2024-11-26 19:47:02.070588] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:11.305 [2024-11-26 19:47:02.070626] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:11.872 19:47:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:11.872 19:47:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:06:11.872 19:47:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:06:11.872 19:47:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:11.872 19:47:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:11.872 [2024-11-26 19:47:02.531862] 
bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:06:11.872 [2024-11-26 19:47:02.531922] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:06:11.872 [2024-11-26 19:47:02.531932] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:11.872 [2024-11-26 19:47:02.531943] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:11.872 19:47:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:11.872 19:47:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:06:11.872 19:47:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:11.872 19:47:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:11.872 19:47:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:06:11.872 19:47:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:06:11.872 19:47:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:11.872 19:47:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:11.872 19:47:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:11.872 19:47:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:11.872 19:47:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:11.872 19:47:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:11.872 19:47:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:06:11.872 19:47:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:11.872 19:47:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:11.872 19:47:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:11.872 19:47:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:11.872 "name": "Existed_Raid", 00:06:11.872 "uuid": "7d1b13d6-aeec-47b3-a38e-e9d069f8c0e6", 00:06:11.872 "strip_size_kb": 0, 00:06:11.872 "state": "configuring", 00:06:11.872 "raid_level": "raid1", 00:06:11.872 "superblock": true, 00:06:11.872 "num_base_bdevs": 2, 00:06:11.872 "num_base_bdevs_discovered": 0, 00:06:11.872 "num_base_bdevs_operational": 2, 00:06:11.872 "base_bdevs_list": [ 00:06:11.872 { 00:06:11.872 "name": "BaseBdev1", 00:06:11.872 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:11.872 "is_configured": false, 00:06:11.872 "data_offset": 0, 00:06:11.872 "data_size": 0 00:06:11.872 }, 00:06:11.872 { 00:06:11.872 "name": "BaseBdev2", 00:06:11.872 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:11.872 "is_configured": false, 00:06:11.872 "data_offset": 0, 00:06:11.872 "data_size": 0 00:06:11.872 } 00:06:11.872 ] 00:06:11.872 }' 00:06:11.872 19:47:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:11.872 19:47:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:12.131 19:47:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:06:12.131 19:47:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:12.131 19:47:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:12.131 [2024-11-26 19:47:02.851877] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid 
bdev: Existed_Raid 00:06:12.131 [2024-11-26 19:47:02.851916] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:06:12.131 19:47:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:12.131 19:47:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:06:12.131 19:47:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:12.131 19:47:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:12.131 [2024-11-26 19:47:02.859882] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:06:12.131 [2024-11-26 19:47:02.859924] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:06:12.131 [2024-11-26 19:47:02.859932] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:12.131 [2024-11-26 19:47:02.859944] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:12.131 19:47:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:12.131 19:47:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:06:12.131 19:47:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:12.131 19:47:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:12.131 [2024-11-26 19:47:02.893286] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:06:12.131 BaseBdev1 00:06:12.131 19:47:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:12.131 19:47:02 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:06:12.131 19:47:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:06:12.131 19:47:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:06:12.131 19:47:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:06:12.131 19:47:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:06:12.131 19:47:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:06:12.131 19:47:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:06:12.131 19:47:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:12.131 19:47:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:12.131 19:47:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:12.131 19:47:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:06:12.131 19:47:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:12.131 19:47:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:12.131 [ 00:06:12.131 { 00:06:12.131 "name": "BaseBdev1", 00:06:12.131 "aliases": [ 00:06:12.131 "5addcef6-586c-4358-9ca3-319521e80bbe" 00:06:12.131 ], 00:06:12.131 "product_name": "Malloc disk", 00:06:12.131 "block_size": 512, 00:06:12.131 "num_blocks": 65536, 00:06:12.131 "uuid": "5addcef6-586c-4358-9ca3-319521e80bbe", 00:06:12.131 "assigned_rate_limits": { 00:06:12.131 "rw_ios_per_sec": 0, 00:06:12.131 "rw_mbytes_per_sec": 0, 00:06:12.131 "r_mbytes_per_sec": 0, 00:06:12.131 "w_mbytes_per_sec": 0 00:06:12.131 }, 00:06:12.131 "claimed": true, 
00:06:12.131 "claim_type": "exclusive_write", 00:06:12.131 "zoned": false, 00:06:12.131 "supported_io_types": { 00:06:12.131 "read": true, 00:06:12.131 "write": true, 00:06:12.131 "unmap": true, 00:06:12.131 "flush": true, 00:06:12.131 "reset": true, 00:06:12.131 "nvme_admin": false, 00:06:12.131 "nvme_io": false, 00:06:12.131 "nvme_io_md": false, 00:06:12.131 "write_zeroes": true, 00:06:12.131 "zcopy": true, 00:06:12.131 "get_zone_info": false, 00:06:12.131 "zone_management": false, 00:06:12.131 "zone_append": false, 00:06:12.131 "compare": false, 00:06:12.131 "compare_and_write": false, 00:06:12.131 "abort": true, 00:06:12.131 "seek_hole": false, 00:06:12.131 "seek_data": false, 00:06:12.131 "copy": true, 00:06:12.131 "nvme_iov_md": false 00:06:12.131 }, 00:06:12.131 "memory_domains": [ 00:06:12.131 { 00:06:12.131 "dma_device_id": "system", 00:06:12.131 "dma_device_type": 1 00:06:12.131 }, 00:06:12.131 { 00:06:12.131 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:12.131 "dma_device_type": 2 00:06:12.131 } 00:06:12.131 ], 00:06:12.131 "driver_specific": {} 00:06:12.131 } 00:06:12.131 ] 00:06:12.131 19:47:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:12.131 19:47:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:06:12.131 19:47:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:06:12.131 19:47:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:12.131 19:47:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:12.131 19:47:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:06:12.131 19:47:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:06:12.131 19:47:02 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:12.131 19:47:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:12.131 19:47:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:12.131 19:47:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:12.131 19:47:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:12.131 19:47:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:12.131 19:47:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:12.131 19:47:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:12.132 19:47:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:12.132 19:47:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:12.132 19:47:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:12.132 "name": "Existed_Raid", 00:06:12.132 "uuid": "b1578125-3ec5-4e03-9d83-c2b3b35a3d62", 00:06:12.132 "strip_size_kb": 0, 00:06:12.132 "state": "configuring", 00:06:12.132 "raid_level": "raid1", 00:06:12.132 "superblock": true, 00:06:12.132 "num_base_bdevs": 2, 00:06:12.132 "num_base_bdevs_discovered": 1, 00:06:12.132 "num_base_bdevs_operational": 2, 00:06:12.132 "base_bdevs_list": [ 00:06:12.132 { 00:06:12.132 "name": "BaseBdev1", 00:06:12.132 "uuid": "5addcef6-586c-4358-9ca3-319521e80bbe", 00:06:12.132 "is_configured": true, 00:06:12.132 "data_offset": 2048, 00:06:12.132 "data_size": 63488 00:06:12.132 }, 00:06:12.132 { 00:06:12.132 "name": "BaseBdev2", 00:06:12.132 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:12.132 "is_configured": false, 00:06:12.132 
"data_offset": 0, 00:06:12.132 "data_size": 0 00:06:12.132 } 00:06:12.132 ] 00:06:12.132 }' 00:06:12.132 19:47:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:12.132 19:47:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:12.390 19:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:06:12.390 19:47:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:12.390 19:47:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:12.390 [2024-11-26 19:47:03.253439] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:06:12.390 [2024-11-26 19:47:03.253586] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:06:12.390 19:47:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:12.390 19:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:06:12.390 19:47:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:12.390 19:47:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:12.390 [2024-11-26 19:47:03.261487] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:06:12.390 [2024-11-26 19:47:03.263439] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:12.390 [2024-11-26 19:47:03.263480] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:12.390 19:47:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:12.390 19:47:03 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:06:12.390 19:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:06:12.390 19:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:06:12.390 19:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:12.390 19:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:12.391 19:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:06:12.391 19:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:06:12.391 19:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:12.391 19:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:12.391 19:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:12.391 19:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:12.391 19:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:12.391 19:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:12.391 19:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:12.391 19:47:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:12.391 19:47:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:12.391 19:47:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:12.391 19:47:03 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:12.391 "name": "Existed_Raid", 00:06:12.391 "uuid": "27050c4c-769e-48e9-896d-8bebb370011b", 00:06:12.391 "strip_size_kb": 0, 00:06:12.391 "state": "configuring", 00:06:12.391 "raid_level": "raid1", 00:06:12.391 "superblock": true, 00:06:12.391 "num_base_bdevs": 2, 00:06:12.391 "num_base_bdevs_discovered": 1, 00:06:12.391 "num_base_bdevs_operational": 2, 00:06:12.391 "base_bdevs_list": [ 00:06:12.391 { 00:06:12.391 "name": "BaseBdev1", 00:06:12.391 "uuid": "5addcef6-586c-4358-9ca3-319521e80bbe", 00:06:12.391 "is_configured": true, 00:06:12.391 "data_offset": 2048, 00:06:12.391 "data_size": 63488 00:06:12.391 }, 00:06:12.391 { 00:06:12.391 "name": "BaseBdev2", 00:06:12.391 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:12.391 "is_configured": false, 00:06:12.391 "data_offset": 0, 00:06:12.391 "data_size": 0 00:06:12.391 } 00:06:12.391 ] 00:06:12.391 }' 00:06:12.391 19:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:12.391 19:47:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:12.957 19:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:06:12.957 19:47:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:12.957 19:47:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:12.957 [2024-11-26 19:47:03.632590] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:06:12.957 [2024-11-26 19:47:03.632832] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:06:12.957 [2024-11-26 19:47:03.632845] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:06:12.957 [2024-11-26 19:47:03.633101] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:06:12.957 
BaseBdev2 00:06:12.957 [2024-11-26 19:47:03.633242] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:06:12.957 [2024-11-26 19:47:03.633255] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:06:12.957 [2024-11-26 19:47:03.633403] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:12.957 19:47:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:12.957 19:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:06:12.957 19:47:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:06:12.957 19:47:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:06:12.957 19:47:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:06:12.957 19:47:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:06:12.957 19:47:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:06:12.957 19:47:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:06:12.957 19:47:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:12.957 19:47:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:12.957 19:47:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:12.957 19:47:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:06:12.957 19:47:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:12.957 19:47:03 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:06:12.957 [ 00:06:12.957 { 00:06:12.957 "name": "BaseBdev2", 00:06:12.957 "aliases": [ 00:06:12.957 "dbe0e9f8-1cb4-4162-95ed-d7eaf7e1bc4a" 00:06:12.957 ], 00:06:12.957 "product_name": "Malloc disk", 00:06:12.957 "block_size": 512, 00:06:12.957 "num_blocks": 65536, 00:06:12.957 "uuid": "dbe0e9f8-1cb4-4162-95ed-d7eaf7e1bc4a", 00:06:12.957 "assigned_rate_limits": { 00:06:12.957 "rw_ios_per_sec": 0, 00:06:12.957 "rw_mbytes_per_sec": 0, 00:06:12.957 "r_mbytes_per_sec": 0, 00:06:12.957 "w_mbytes_per_sec": 0 00:06:12.957 }, 00:06:12.957 "claimed": true, 00:06:12.957 "claim_type": "exclusive_write", 00:06:12.957 "zoned": false, 00:06:12.957 "supported_io_types": { 00:06:12.957 "read": true, 00:06:12.957 "write": true, 00:06:12.957 "unmap": true, 00:06:12.957 "flush": true, 00:06:12.958 "reset": true, 00:06:12.958 "nvme_admin": false, 00:06:12.958 "nvme_io": false, 00:06:12.958 "nvme_io_md": false, 00:06:12.958 "write_zeroes": true, 00:06:12.958 "zcopy": true, 00:06:12.958 "get_zone_info": false, 00:06:12.958 "zone_management": false, 00:06:12.958 "zone_append": false, 00:06:12.958 "compare": false, 00:06:12.958 "compare_and_write": false, 00:06:12.958 "abort": true, 00:06:12.958 "seek_hole": false, 00:06:12.958 "seek_data": false, 00:06:12.958 "copy": true, 00:06:12.958 "nvme_iov_md": false 00:06:12.958 }, 00:06:12.958 "memory_domains": [ 00:06:12.958 { 00:06:12.958 "dma_device_id": "system", 00:06:12.958 "dma_device_type": 1 00:06:12.958 }, 00:06:12.958 { 00:06:12.958 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:12.958 "dma_device_type": 2 00:06:12.958 } 00:06:12.958 ], 00:06:12.958 "driver_specific": {} 00:06:12.958 } 00:06:12.958 ] 00:06:12.958 19:47:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:12.958 19:47:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:06:12.958 19:47:03 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:06:12.958 19:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:06:12.958 19:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:06:12.958 19:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:12.958 19:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:06:12.958 19:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:06:12.958 19:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:06:12.958 19:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:12.958 19:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:12.958 19:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:12.958 19:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:12.958 19:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:12.958 19:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:12.958 19:47:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:12.958 19:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:12.958 19:47:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:12.958 19:47:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:12.958 19:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:06:12.958 "name": "Existed_Raid", 00:06:12.958 "uuid": "27050c4c-769e-48e9-896d-8bebb370011b", 00:06:12.958 "strip_size_kb": 0, 00:06:12.958 "state": "online", 00:06:12.958 "raid_level": "raid1", 00:06:12.958 "superblock": true, 00:06:12.958 "num_base_bdevs": 2, 00:06:12.958 "num_base_bdevs_discovered": 2, 00:06:12.958 "num_base_bdevs_operational": 2, 00:06:12.958 "base_bdevs_list": [ 00:06:12.958 { 00:06:12.958 "name": "BaseBdev1", 00:06:12.958 "uuid": "5addcef6-586c-4358-9ca3-319521e80bbe", 00:06:12.958 "is_configured": true, 00:06:12.958 "data_offset": 2048, 00:06:12.958 "data_size": 63488 00:06:12.958 }, 00:06:12.958 { 00:06:12.958 "name": "BaseBdev2", 00:06:12.958 "uuid": "dbe0e9f8-1cb4-4162-95ed-d7eaf7e1bc4a", 00:06:12.958 "is_configured": true, 00:06:12.958 "data_offset": 2048, 00:06:12.958 "data_size": 63488 00:06:12.958 } 00:06:12.958 ] 00:06:12.958 }' 00:06:12.958 19:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:12.958 19:47:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:13.217 19:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:06:13.217 19:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:06:13.217 19:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:06:13.217 19:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:06:13.217 19:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:06:13.217 19:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:06:13.217 19:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:06:13.217 19:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- 
# rpc_cmd bdev_get_bdevs -b Existed_Raid 00:06:13.217 19:47:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:13.217 19:47:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:13.217 [2024-11-26 19:47:03.989010] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:13.217 19:47:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:13.217 19:47:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:06:13.217 "name": "Existed_Raid", 00:06:13.217 "aliases": [ 00:06:13.217 "27050c4c-769e-48e9-896d-8bebb370011b" 00:06:13.217 ], 00:06:13.217 "product_name": "Raid Volume", 00:06:13.217 "block_size": 512, 00:06:13.217 "num_blocks": 63488, 00:06:13.217 "uuid": "27050c4c-769e-48e9-896d-8bebb370011b", 00:06:13.217 "assigned_rate_limits": { 00:06:13.217 "rw_ios_per_sec": 0, 00:06:13.217 "rw_mbytes_per_sec": 0, 00:06:13.217 "r_mbytes_per_sec": 0, 00:06:13.217 "w_mbytes_per_sec": 0 00:06:13.217 }, 00:06:13.217 "claimed": false, 00:06:13.217 "zoned": false, 00:06:13.217 "supported_io_types": { 00:06:13.217 "read": true, 00:06:13.217 "write": true, 00:06:13.217 "unmap": false, 00:06:13.217 "flush": false, 00:06:13.217 "reset": true, 00:06:13.217 "nvme_admin": false, 00:06:13.217 "nvme_io": false, 00:06:13.217 "nvme_io_md": false, 00:06:13.217 "write_zeroes": true, 00:06:13.217 "zcopy": false, 00:06:13.217 "get_zone_info": false, 00:06:13.217 "zone_management": false, 00:06:13.217 "zone_append": false, 00:06:13.217 "compare": false, 00:06:13.217 "compare_and_write": false, 00:06:13.217 "abort": false, 00:06:13.217 "seek_hole": false, 00:06:13.217 "seek_data": false, 00:06:13.217 "copy": false, 00:06:13.217 "nvme_iov_md": false 00:06:13.217 }, 00:06:13.217 "memory_domains": [ 00:06:13.217 { 00:06:13.217 "dma_device_id": "system", 00:06:13.217 "dma_device_type": 1 00:06:13.217 }, 
00:06:13.217 { 00:06:13.217 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:13.217 "dma_device_type": 2 00:06:13.217 }, 00:06:13.217 { 00:06:13.217 "dma_device_id": "system", 00:06:13.217 "dma_device_type": 1 00:06:13.217 }, 00:06:13.217 { 00:06:13.217 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:13.217 "dma_device_type": 2 00:06:13.217 } 00:06:13.217 ], 00:06:13.217 "driver_specific": { 00:06:13.217 "raid": { 00:06:13.217 "uuid": "27050c4c-769e-48e9-896d-8bebb370011b", 00:06:13.217 "strip_size_kb": 0, 00:06:13.217 "state": "online", 00:06:13.217 "raid_level": "raid1", 00:06:13.217 "superblock": true, 00:06:13.217 "num_base_bdevs": 2, 00:06:13.217 "num_base_bdevs_discovered": 2, 00:06:13.217 "num_base_bdevs_operational": 2, 00:06:13.217 "base_bdevs_list": [ 00:06:13.217 { 00:06:13.217 "name": "BaseBdev1", 00:06:13.217 "uuid": "5addcef6-586c-4358-9ca3-319521e80bbe", 00:06:13.217 "is_configured": true, 00:06:13.217 "data_offset": 2048, 00:06:13.217 "data_size": 63488 00:06:13.217 }, 00:06:13.217 { 00:06:13.217 "name": "BaseBdev2", 00:06:13.217 "uuid": "dbe0e9f8-1cb4-4162-95ed-d7eaf7e1bc4a", 00:06:13.217 "is_configured": true, 00:06:13.217 "data_offset": 2048, 00:06:13.217 "data_size": 63488 00:06:13.217 } 00:06:13.217 ] 00:06:13.217 } 00:06:13.217 } 00:06:13.217 }' 00:06:13.217 19:47:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:06:13.217 19:47:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:06:13.217 BaseBdev2' 00:06:13.217 19:47:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:13.217 19:47:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:06:13.217 19:47:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 
00:06:13.217 19:47:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:06:13.217 19:47:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:13.217 19:47:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:13.217 19:47:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:13.217 19:47:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:13.217 19:47:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:13.217 19:47:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:13.217 19:47:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:13.217 19:47:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:06:13.217 19:47:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:13.217 19:47:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:13.217 19:47:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:13.217 19:47:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:13.479 19:47:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:13.479 19:47:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:13.479 19:47:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:06:13.479 19:47:04 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:06:13.479 19:47:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:13.479 [2024-11-26 19:47:04.168812] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:06:13.479 19:47:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:13.479 19:47:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:06:13.479 19:47:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:06:13.479 19:47:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:06:13.479 19:47:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:06:13.479 19:47:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:06:13.479 19:47:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:06:13.479 19:47:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:13.479 19:47:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:06:13.479 19:47:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:06:13.479 19:47:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:06:13.479 19:47:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:06:13.479 19:47:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:13.479 19:47:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:13.479 19:47:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:13.479 
19:47:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:13.479 19:47:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:13.479 19:47:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:13.479 19:47:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:13.479 19:47:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:13.479 19:47:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:13.479 19:47:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:13.479 "name": "Existed_Raid", 00:06:13.479 "uuid": "27050c4c-769e-48e9-896d-8bebb370011b", 00:06:13.479 "strip_size_kb": 0, 00:06:13.479 "state": "online", 00:06:13.479 "raid_level": "raid1", 00:06:13.479 "superblock": true, 00:06:13.479 "num_base_bdevs": 2, 00:06:13.479 "num_base_bdevs_discovered": 1, 00:06:13.479 "num_base_bdevs_operational": 1, 00:06:13.479 "base_bdevs_list": [ 00:06:13.479 { 00:06:13.479 "name": null, 00:06:13.479 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:13.479 "is_configured": false, 00:06:13.479 "data_offset": 0, 00:06:13.479 "data_size": 63488 00:06:13.479 }, 00:06:13.479 { 00:06:13.479 "name": "BaseBdev2", 00:06:13.479 "uuid": "dbe0e9f8-1cb4-4162-95ed-d7eaf7e1bc4a", 00:06:13.479 "is_configured": true, 00:06:13.479 "data_offset": 2048, 00:06:13.479 "data_size": 63488 00:06:13.479 } 00:06:13.479 ] 00:06:13.479 }' 00:06:13.479 19:47:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:13.479 19:47:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:13.740 19:47:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:06:13.740 19:47:04 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:06:13.740 19:47:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:13.740 19:47:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:13.740 19:47:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:13.740 19:47:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:06:13.740 19:47:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:13.740 19:47:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:06:13.740 19:47:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:06:13.740 19:47:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:06:13.740 19:47:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:13.740 19:47:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:13.740 [2024-11-26 19:47:04.587298] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:06:13.740 [2024-11-26 19:47:04.587441] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:13.740 [2024-11-26 19:47:04.651969] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:13.740 [2024-11-26 19:47:04.652041] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:13.740 [2024-11-26 19:47:04.652053] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:06:13.740 19:47:04 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:13.740 19:47:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:06:13.740 19:47:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:06:13.740 19:47:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:13.740 19:47:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:13.740 19:47:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:06:13.740 19:47:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:13.741 19:47:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:14.001 19:47:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:06:14.001 19:47:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:06:14.001 19:47:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:06:14.001 19:47:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 61578 00:06:14.001 19:47:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 61578 ']' 00:06:14.001 19:47:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 61578 00:06:14.001 19:47:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:06:14.001 19:47:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:14.001 19:47:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61578 00:06:14.001 killing process with pid 61578 00:06:14.001 19:47:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 
00:06:14.001 19:47:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:14.001 19:47:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61578' 00:06:14.001 19:47:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 61578 00:06:14.001 [2024-11-26 19:47:04.709746] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:14.001 19:47:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 61578 00:06:14.001 [2024-11-26 19:47:04.720938] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:14.572 ************************************ 00:06:14.572 END TEST raid_state_function_test_sb 00:06:14.572 ************************************ 00:06:14.572 19:47:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:06:14.572 00:06:14.572 real 0m3.955s 00:06:14.572 user 0m5.643s 00:06:14.572 sys 0m0.666s 00:06:14.572 19:47:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:14.572 19:47:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:14.832 19:47:05 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:06:14.832 19:47:05 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:06:14.832 19:47:05 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:14.832 19:47:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:14.832 ************************************ 00:06:14.832 START TEST raid_superblock_test 00:06:14.832 ************************************ 00:06:14.832 19:47:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:06:14.832 19:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 
00:06:14.832 19:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:06:14.832 19:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:06:14.832 19:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:06:14.832 19:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:06:14.832 19:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:06:14.832 19:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:06:14.832 19:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:06:14.832 19:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:06:14.832 19:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:06:14.832 19:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:06:14.832 19:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:06:14.832 19:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:06:14.833 19:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:06:14.833 19:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:06:14.833 19:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=61819 00:06:14.833 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:14.833 19:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 61819 00:06:14.833 19:47:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 61819 ']' 00:06:14.833 19:47:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:14.833 19:47:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:14.833 19:47:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:14.833 19:47:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:14.833 19:47:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:14.833 19:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:06:14.833 [2024-11-26 19:47:05.645570] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 
00:06:14.833 [2024-11-26 19:47:05.645778] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61819 ] 00:06:15.093 [2024-11-26 19:47:05.814201] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.093 [2024-11-26 19:47:05.933436] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.351 [2024-11-26 19:47:06.081798] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:15.351 [2024-11-26 19:47:06.081852] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:15.613 19:47:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:15.613 19:47:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:06:15.613 19:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:06:15.613 19:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:06:15.613 19:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:06:15.613 19:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:06:15.613 19:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:06:15.613 19:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:06:15.614 19:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:06:15.614 19:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:06:15.614 19:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:06:15.614 
19:47:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:15.614 19:47:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:15.614 malloc1
00:06:15.614 19:47:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:15.614 19:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:06:15.614 19:47:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:15.614 19:47:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:15.614 [2024-11-26 19:47:06.525292] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:06:15.614 [2024-11-26 19:47:06.525398] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:06:15.614 [2024-11-26 19:47:06.525437] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:06:15.614 [2024-11-26 19:47:06.525455] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:06:15.614 [2024-11-26 19:47:06.528652] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:06:15.614 [2024-11-26 19:47:06.528707] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:06:15.614 pt1
00:06:15.614 19:47:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:15.614 19:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:06:15.614 19:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:06:15.614 19:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2
00:06:15.614 19:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2
00:06:15.614 19:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002
00:06:15.614 19:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:06:15.614 19:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:06:15.614 19:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:06:15.614 19:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2
00:06:15.614 19:47:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:15.614 19:47:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:15.873 malloc2
00:06:15.873 19:47:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:15.873 19:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:06:15.873 19:47:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:15.873 19:47:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:15.873 [2024-11-26 19:47:06.582420] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:06:15.873 [2024-11-26 19:47:06.582631] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:06:15.873 [2024-11-26 19:47:06.582680] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80
00:06:15.873 [2024-11-26 19:47:06.582697] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:06:15.873 [2024-11-26 19:47:06.585901] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:06:15.873 pt2
00:06:15.873 [2024-11-26 19:47:06.586064] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:06:15.873 19:47:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:15.873 19:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:06:15.873 19:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:06:15.873 19:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s
00:06:15.873 19:47:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:15.873 19:47:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:15.873 [2024-11-26 19:47:06.590497] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:06:15.874 [2024-11-26 19:47:06.593386] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:06:15.874 [2024-11-26 19:47:06.593758] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
00:06:15.874 [2024-11-26 19:47:06.593885] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:06:15.874 [2024-11-26 19:47:06.594356] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:06:15.874 [2024-11-26 19:47:06.594679] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
00:06:15.874 [2024-11-26 19:47:06.594781] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780
00:06:15.874 [2024-11-26 19:47:06.595173] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:06:15.874 19:47:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:15.874 19:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:06:15.874 19:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:06:15.874 19:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:06:15.874 19:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:06:15.874 19:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:06:15.874 19:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:06:15.874 19:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:06:15.874 19:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:06:15.874 19:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:06:15.874 19:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:06:15.874 19:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:06:15.874 19:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:06:15.874 19:47:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:15.874 19:47:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:15.874 19:47:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:15.874 19:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:06:15.874 "name": "raid_bdev1",
00:06:15.874 "uuid": "94f6010c-d08a-4bc1-88f9-8891ea54d8a4",
00:06:15.874 "strip_size_kb": 0,
00:06:15.874 "state": "online",
00:06:15.874 "raid_level": "raid1",
00:06:15.874 "superblock": true,
00:06:15.874 "num_base_bdevs": 2,
00:06:15.874 "num_base_bdevs_discovered": 2,
00:06:15.874 "num_base_bdevs_operational": 2,
00:06:15.874 "base_bdevs_list": [
00:06:15.874 {
00:06:15.874 "name": "pt1",
00:06:15.874 "uuid": "00000000-0000-0000-0000-000000000001",
00:06:15.874 "is_configured": true,
00:06:15.874 "data_offset": 2048,
00:06:15.874 "data_size": 63488
00:06:15.874 },
00:06:15.874 {
00:06:15.874 "name": "pt2",
00:06:15.874 "uuid": "00000000-0000-0000-0000-000000000002",
00:06:15.874 "is_configured": true,
00:06:15.874 "data_offset": 2048,
00:06:15.874 "data_size": 63488
00:06:15.874 }
00:06:15.874 ]
00:06:15.874 }'
00:06:15.874 19:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:06:15.874 19:47:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:16.135 19:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1
00:06:16.135 19:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:06:16.135 19:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:06:16.135 19:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:06:16.135 19:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name
00:06:16.135 19:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:06:16.135 19:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:06:16.135 19:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:06:16.135 19:47:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:16.135 19:47:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:16.135 [2024-11-26 19:47:06.923582] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:06:16.135 19:47:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:16.135 19:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:06:16.135 "name": "raid_bdev1",
00:06:16.135 "aliases": [
00:06:16.135 "94f6010c-d08a-4bc1-88f9-8891ea54d8a4"
00:06:16.135 ],
00:06:16.135 "product_name": "Raid Volume",
00:06:16.135 "block_size": 512,
00:06:16.135 "num_blocks": 63488,
00:06:16.135 "uuid": "94f6010c-d08a-4bc1-88f9-8891ea54d8a4",
00:06:16.135 "assigned_rate_limits": {
00:06:16.135 "rw_ios_per_sec": 0,
00:06:16.135 "rw_mbytes_per_sec": 0,
00:06:16.135 "r_mbytes_per_sec": 0,
00:06:16.135 "w_mbytes_per_sec": 0
00:06:16.135 },
00:06:16.135 "claimed": false,
00:06:16.135 "zoned": false,
00:06:16.135 "supported_io_types": {
00:06:16.135 "read": true,
00:06:16.135 "write": true,
00:06:16.135 "unmap": false,
00:06:16.135 "flush": false,
00:06:16.135 "reset": true,
00:06:16.135 "nvme_admin": false,
00:06:16.135 "nvme_io": false,
00:06:16.135 "nvme_io_md": false,
00:06:16.135 "write_zeroes": true,
00:06:16.135 "zcopy": false,
00:06:16.135 "get_zone_info": false,
00:06:16.135 "zone_management": false,
00:06:16.135 "zone_append": false,
00:06:16.135 "compare": false,
00:06:16.135 "compare_and_write": false,
00:06:16.135 "abort": false,
00:06:16.135 "seek_hole": false,
00:06:16.135 "seek_data": false,
00:06:16.135 "copy": false,
00:06:16.135 "nvme_iov_md": false
00:06:16.135 },
00:06:16.135 "memory_domains": [
00:06:16.135 {
00:06:16.135 "dma_device_id": "system",
00:06:16.135 "dma_device_type": 1
00:06:16.135 },
00:06:16.135 {
00:06:16.135 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:06:16.135 "dma_device_type": 2
00:06:16.135 },
00:06:16.135 {
00:06:16.135 "dma_device_id": "system",
00:06:16.135 "dma_device_type": 1
00:06:16.135 },
00:06:16.135 {
00:06:16.135 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:06:16.135 "dma_device_type": 2
00:06:16.135 }
00:06:16.135 ],
00:06:16.135 "driver_specific": {
00:06:16.135 "raid": {
00:06:16.135 "uuid": "94f6010c-d08a-4bc1-88f9-8891ea54d8a4",
00:06:16.135 "strip_size_kb": 0,
00:06:16.135 "state": "online",
00:06:16.135 "raid_level": "raid1",
00:06:16.135 "superblock": true,
00:06:16.135 "num_base_bdevs": 2,
00:06:16.135 "num_base_bdevs_discovered": 2,
00:06:16.135 "num_base_bdevs_operational": 2,
00:06:16.135 "base_bdevs_list": [
00:06:16.135 {
00:06:16.135 "name": "pt1",
00:06:16.135 "uuid": "00000000-0000-0000-0000-000000000001",
00:06:16.135 "is_configured": true,
00:06:16.135 "data_offset": 2048,
00:06:16.135 "data_size": 63488
00:06:16.135 },
00:06:16.135 {
00:06:16.135 "name": "pt2",
00:06:16.135 "uuid": "00000000-0000-0000-0000-000000000002",
00:06:16.135 "is_configured": true,
00:06:16.135 "data_offset": 2048,
00:06:16.135 "data_size": 63488
00:06:16.135 }
00:06:16.135 ]
00:06:16.135 }
00:06:16.135 }
00:06:16.135 }'
00:06:16.135 19:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:06:16.135 19:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:06:16.135 pt2'
00:06:16.135 19:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:06:16.135 19:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:06:16.135 19:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:06:16.135 19:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:06:16.135 19:47:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:16.135 19:47:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:16.135 19:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:06:16.135 19:47:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:16.135 19:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:06:16.135 19:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:06:16.135 19:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:06:16.135 19:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:06:16.135 19:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:06:16.135 19:47:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:16.135 19:47:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:16.135 19:47:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:16.396 19:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:06:16.396 19:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:06:16.396 19:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid'
00:06:16.396 19:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:06:16.396 19:47:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:16.396 19:47:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:16.396 [2024-11-26 19:47:07.087535] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:06:16.396 19:47:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:16.396 19:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=94f6010c-d08a-4bc1-88f9-8891ea54d8a4
00:06:16.396 19:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 94f6010c-d08a-4bc1-88f9-8891ea54d8a4 ']'
00:06:16.396 19:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:06:16.396 19:47:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:16.396 19:47:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:16.396 [2024-11-26 19:47:07.115169] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:06:16.396 [2024-11-26 19:47:07.115277] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:06:16.396 [2024-11-26 19:47:07.115395] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:06:16.396 [2024-11-26 19:47:07.115465] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:06:16.396 [2024-11-26 19:47:07.115477] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline
00:06:16.396 19:47:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:16.396 19:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all
00:06:16.396 19:47:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:16.396 19:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]'
00:06:16.396 19:47:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:16.396 19:47:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:16.396 19:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev=
00:06:16.396 19:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']'
00:06:16.396 19:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:06:16.396 19:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1
00:06:16.396 19:47:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:16.396 19:47:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:16.396 19:47:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:16.396 19:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:06:16.396 19:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2
00:06:16.396 19:47:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:16.396 19:47:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:16.396 19:47:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:16.396 19:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs
00:06:16.396 19:47:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:16.396 19:47:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:16.396 19:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any'
00:06:16.396 19:47:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:16.396 19:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']'
00:06:16.396 19:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1
00:06:16.396 19:47:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0
00:06:16.396 19:47:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1
00:06:16.396 19:47:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:06:16.396 19:47:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:06:16.396 19:47:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:06:16.396 19:47:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:06:16.396 19:47:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1
00:06:16.396 19:47:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:16.396 19:47:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:16.397 [2024-11-26 19:47:07.207233] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:06:16.397 [2024-11-26 19:47:07.209311] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:06:16.397 [2024-11-26 19:47:07.209399] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1
00:06:16.397 [2024-11-26 19:47:07.209456] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2
00:06:16.397 [2024-11-26 19:47:07.209472] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:06:16.397 [2024-11-26 19:47:07.209497] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring
00:06:16.397 request:
00:06:16.397 {
00:06:16.397 "name": "raid_bdev1",
00:06:16.397 "raid_level": "raid1",
00:06:16.397 "base_bdevs": [
00:06:16.397 "malloc1",
00:06:16.397 "malloc2"
00:06:16.397 ],
00:06:16.397 "superblock": false,
00:06:16.397 "method": "bdev_raid_create",
00:06:16.397 "req_id": 1
00:06:16.397 }
00:06:16.397 Got JSON-RPC error response
00:06:16.397 response:
00:06:16.397 {
00:06:16.397 "code": -17,
00:06:16.397 "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:06:16.397 }
00:06:16.397 19:47:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:06:16.397 19:47:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1
00:06:16.397 19:47:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:06:16.397 19:47:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:06:16.397 19:47:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:06:16.397 19:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]'
00:06:16.397 19:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all
00:06:16.397 19:47:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:16.397 19:47:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:16.397 19:47:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:16.397 19:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev=
00:06:16.397 19:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']'
00:06:16.397 19:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:06:16.397 19:47:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:16.397 19:47:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:16.397 [2024-11-26 19:47:07.251223] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:06:16.397 [2024-11-26 19:47:07.251287] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:06:16.397 [2024-11-26 19:47:07.251308] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80
00:06:16.397 [2024-11-26 19:47:07.251319] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:06:16.397 [2024-11-26 19:47:07.253729] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:06:16.397 [2024-11-26 19:47:07.253768] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:06:16.397 [2024-11-26 19:47:07.253863] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1
00:06:16.397 [2024-11-26 19:47:07.253920] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:06:16.397 pt1
00:06:16.397 19:47:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:16.397 19:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2
00:06:16.397 19:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:06:16.397 19:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:06:16.397 19:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:06:16.397 19:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:06:16.397 19:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:06:16.397 19:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:06:16.397 19:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:06:16.397 19:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:06:16.397 19:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:06:16.397 19:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:06:16.397 19:47:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:16.397 19:47:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:16.397 19:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:06:16.397 19:47:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:16.397 19:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:06:16.397 "name": "raid_bdev1",
00:06:16.397 "uuid": "94f6010c-d08a-4bc1-88f9-8891ea54d8a4",
00:06:16.397 "strip_size_kb": 0,
00:06:16.397 "state": "configuring",
00:06:16.397 "raid_level": "raid1",
00:06:16.397 "superblock": true,
00:06:16.397 "num_base_bdevs": 2,
00:06:16.397 "num_base_bdevs_discovered": 1,
00:06:16.397 "num_base_bdevs_operational": 2,
00:06:16.397 "base_bdevs_list": [
00:06:16.397 {
00:06:16.397 "name": "pt1",
00:06:16.397 "uuid": "00000000-0000-0000-0000-000000000001",
00:06:16.397 "is_configured": true,
00:06:16.397 "data_offset": 2048,
00:06:16.397 "data_size": 63488
00:06:16.397 },
00:06:16.397 {
00:06:16.397 "name": null,
00:06:16.397 "uuid": "00000000-0000-0000-0000-000000000002",
00:06:16.397 "is_configured": false,
00:06:16.397 "data_offset": 2048,
00:06:16.397 "data_size": 63488
00:06:16.397 }
00:06:16.397 ]
00:06:16.397 }'
00:06:16.397 19:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:06:16.397 19:47:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:16.655 19:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']'
00:06:16.655 19:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 ))
00:06:16.655 19:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:06:16.656 19:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:06:16.656 19:47:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:16.656 19:47:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:16.656 [2024-11-26 19:47:07.583319] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:06:16.656 [2024-11-26 19:47:07.583412] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:06:16.656 [2024-11-26 19:47:07.583434] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080
00:06:16.656 [2024-11-26 19:47:07.583446] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:06:16.656 [2024-11-26 19:47:07.583925] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:06:16.656 [2024-11-26 19:47:07.583947] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:06:16.656 [2024-11-26 19:47:07.584033] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:06:16.656 [2024-11-26 19:47:07.584061] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:06:16.656 [2024-11-26 19:47:07.584177] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:06:16.656 [2024-11-26 19:47:07.584189] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:06:16.656 [2024-11-26 19:47:07.584462] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10
00:06:16.656 [2024-11-26 19:47:07.584602] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
00:06:16.656 [2024-11-26 19:47:07.584611] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80
00:06:16.656 [2024-11-26 19:47:07.584748] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:06:16.656 pt2
00:06:16.656 19:47:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:16.656 19:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:06:16.656 19:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:06:16.656 19:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:06:16.656 19:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:06:16.656 19:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:06:16.656 19:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:06:16.656 19:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:06:16.656 19:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:06:16.656 19:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:06:16.656 19:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:06:16.656 19:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:06:16.656 19:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:06:16.914 19:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:06:16.914 19:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:06:16.914 19:47:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:16.914 19:47:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:16.914 19:47:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:16.914 19:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:06:16.914 "name": "raid_bdev1",
00:06:16.914 "uuid": "94f6010c-d08a-4bc1-88f9-8891ea54d8a4",
00:06:16.914 "strip_size_kb": 0,
00:06:16.914 "state": "online",
00:06:16.914 "raid_level": "raid1",
00:06:16.914 "superblock": true,
00:06:16.914 "num_base_bdevs": 2,
00:06:16.914 "num_base_bdevs_discovered": 2,
00:06:16.914 "num_base_bdevs_operational": 2,
00:06:16.914 "base_bdevs_list": [
00:06:16.914 {
00:06:16.914 "name": "pt1",
00:06:16.914 "uuid": "00000000-0000-0000-0000-000000000001",
00:06:16.914 "is_configured": true,
00:06:16.914 "data_offset": 2048,
00:06:16.914 "data_size": 63488
00:06:16.914 },
00:06:16.914 {
00:06:16.914 "name": "pt2",
00:06:16.914 "uuid": "00000000-0000-0000-0000-000000000002",
00:06:16.914 "is_configured": true,
00:06:16.914 "data_offset": 2048,
00:06:16.914 "data_size": 63488
00:06:16.914 }
00:06:16.914 ]
00:06:16.914 }'
00:06:16.914 19:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:06:16.914 19:47:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:17.172 19:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1
00:06:17.172 19:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:06:17.172 19:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:06:17.172 19:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:06:17.172 19:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name
00:06:17.172 19:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:06:17.172 19:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:06:17.172 19:47:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:17.172 19:47:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:17.172 19:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:06:17.172 [2024-11-26 19:47:07.931667] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:06:17.172 19:47:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:17.172 19:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:06:17.172 "name": "raid_bdev1",
00:06:17.172 "aliases": [
00:06:17.172 "94f6010c-d08a-4bc1-88f9-8891ea54d8a4"
00:06:17.172 ],
00:06:17.172 "product_name": "Raid Volume",
00:06:17.172 "block_size": 512,
00:06:17.172 "num_blocks": 63488,
00:06:17.172 "uuid": "94f6010c-d08a-4bc1-88f9-8891ea54d8a4",
00:06:17.172 "assigned_rate_limits": {
00:06:17.172 "rw_ios_per_sec": 0,
00:06:17.172 "rw_mbytes_per_sec": 0,
00:06:17.172 "r_mbytes_per_sec": 0,
00:06:17.172 "w_mbytes_per_sec": 0
00:06:17.172 },
00:06:17.172 "claimed": false,
00:06:17.172 "zoned": false,
00:06:17.172 "supported_io_types": {
00:06:17.172 "read": true,
00:06:17.172 "write": true,
00:06:17.172 "unmap": false,
00:06:17.172 "flush": false,
00:06:17.172 "reset": true,
00:06:17.172 "nvme_admin": false,
00:06:17.172 "nvme_io": false,
00:06:17.172 "nvme_io_md": false,
00:06:17.172 "write_zeroes": true,
00:06:17.172 "zcopy": false,
00:06:17.172 "get_zone_info": false,
00:06:17.172 "zone_management": false,
00:06:17.172 "zone_append": false,
00:06:17.172 "compare": false,
00:06:17.172 "compare_and_write": false,
00:06:17.172 "abort": false,
00:06:17.172 "seek_hole": false,
00:06:17.172 "seek_data": false,
00:06:17.172 "copy": false,
00:06:17.172 "nvme_iov_md": false
00:06:17.172 },
00:06:17.172 "memory_domains": [
00:06:17.172 {
00:06:17.172 "dma_device_id": "system",
00:06:17.172 "dma_device_type": 1
00:06:17.172 },
00:06:17.172 {
00:06:17.172 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:06:17.172 "dma_device_type": 2
00:06:17.172 },
00:06:17.172 {
00:06:17.172 "dma_device_id": "system",
00:06:17.172 "dma_device_type": 1
00:06:17.172 },
00:06:17.173 {
00:06:17.173 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:06:17.173 "dma_device_type": 2
00:06:17.173 }
00:06:17.173 ],
00:06:17.173 "driver_specific": {
00:06:17.173 "raid": {
00:06:17.173 "uuid": "94f6010c-d08a-4bc1-88f9-8891ea54d8a4",
00:06:17.173 "strip_size_kb": 0,
00:06:17.173 "state": "online",
00:06:17.173 "raid_level": "raid1",
00:06:17.173 "superblock": true,
00:06:17.173 "num_base_bdevs": 2,
00:06:17.173 "num_base_bdevs_discovered": 2,
00:06:17.173 "num_base_bdevs_operational": 2,
00:06:17.173 "base_bdevs_list": [
00:06:17.173 {
00:06:17.173 "name": "pt1",
00:06:17.173 "uuid": "00000000-0000-0000-0000-000000000001",
00:06:17.173 "is_configured": true,
00:06:17.173 "data_offset": 2048,
00:06:17.173 "data_size": 63488
00:06:17.173 },
00:06:17.173 {
00:06:17.173 "name": "pt2",
00:06:17.173 "uuid": "00000000-0000-0000-0000-000000000002",
00:06:17.173 "is_configured": true,
00:06:17.173 "data_offset": 2048,
00:06:17.173 "data_size": 63488
00:06:17.173 }
00:06:17.173 ]
00:06:17.173 }
00:06:17.173 }
00:06:17.173 }'
00:06:17.173 19:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:06:17.173 19:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:06:17.173 pt2'
00:06:17.173 19:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:06:17.173 19:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:06:17.173 19:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:06:17.173 19:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:06:17.173 19:47:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:17.173 19:47:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:17.173 19:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:06:17.173 19:47:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:17.173 19:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:06:17.173 19:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:06:17.173 19:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:06:17.173 19:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:06:17.173 19:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:06:17.173 19:47:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:17.173 19:47:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:17.173 19:47:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:17.173 19:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:06:17.173 19:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:06:17.173 19:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:06:17.173 19:47:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:17.173 19:47:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:17.173 19:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid'
00:06:17.173 [2024-11-26 19:47:08.095700] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:06:17.430 19:47:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:17.430 19:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 94f6010c-d08a-4bc1-88f9-8891ea54d8a4 '!=' 94f6010c-d08a-4bc1-88f9-8891ea54d8a4 ']'
00:06:17.430 19:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1
00:06:17.430 19:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:06:17.430 19:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0
00:06:17.430 19:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1
00:06:17.430 19:47:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:17.430 19:47:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:17.430 [2024-11-26 19:47:08.119463] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1
00:06:17.430 19:47:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:17.430 19:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:06:17.430 19:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:06:17.430 19:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:06:17.430 19:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:06:17.430 19:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:06:17.430 19:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107
-- # local num_base_bdevs_operational=1 00:06:17.430 19:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:17.430 19:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:17.430 19:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:17.430 19:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:17.430 19:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:17.430 19:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:06:17.430 19:47:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:17.431 19:47:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:17.431 19:47:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:17.431 19:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:17.431 "name": "raid_bdev1", 00:06:17.431 "uuid": "94f6010c-d08a-4bc1-88f9-8891ea54d8a4", 00:06:17.431 "strip_size_kb": 0, 00:06:17.431 "state": "online", 00:06:17.431 "raid_level": "raid1", 00:06:17.431 "superblock": true, 00:06:17.431 "num_base_bdevs": 2, 00:06:17.431 "num_base_bdevs_discovered": 1, 00:06:17.431 "num_base_bdevs_operational": 1, 00:06:17.431 "base_bdevs_list": [ 00:06:17.431 { 00:06:17.431 "name": null, 00:06:17.431 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:17.431 "is_configured": false, 00:06:17.431 "data_offset": 0, 00:06:17.431 "data_size": 63488 00:06:17.431 }, 00:06:17.431 { 00:06:17.431 "name": "pt2", 00:06:17.431 "uuid": "00000000-0000-0000-0000-000000000002", 00:06:17.431 "is_configured": true, 00:06:17.431 "data_offset": 2048, 00:06:17.431 "data_size": 63488 00:06:17.431 } 00:06:17.431 ] 00:06:17.431 }' 00:06:17.431 19:47:08 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:17.431 19:47:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:17.688 19:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:06:17.688 19:47:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:17.688 19:47:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:17.688 [2024-11-26 19:47:08.431511] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:06:17.688 [2024-11-26 19:47:08.431541] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:17.688 [2024-11-26 19:47:08.431624] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:17.688 [2024-11-26 19:47:08.431676] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:17.688 [2024-11-26 19:47:08.431688] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:06:17.688 19:47:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:17.688 19:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:17.688 19:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:06:17.688 19:47:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:17.688 19:47:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:17.688 19:47:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:17.688 19:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:06:17.688 19:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:06:17.688 
19:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:06:17.688 19:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:06:17.688 19:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:06:17.688 19:47:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:17.688 19:47:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:17.688 19:47:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:17.688 19:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:06:17.688 19:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:06:17.688 19:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:06:17.688 19:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:06:17.688 19:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=1 00:06:17.688 19:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:06:17.688 19:47:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:17.688 19:47:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:17.688 [2024-11-26 19:47:08.479511] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:06:17.688 [2024-11-26 19:47:08.479573] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:17.688 [2024-11-26 19:47:08.479591] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:06:17.688 [2024-11-26 19:47:08.479602] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:17.689 [2024-11-26 
19:47:08.482030] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:17.689 [2024-11-26 19:47:08.482068] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:06:17.689 [2024-11-26 19:47:08.482157] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:06:17.689 [2024-11-26 19:47:08.482207] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:06:17.689 [2024-11-26 19:47:08.482310] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:06:17.689 [2024-11-26 19:47:08.482323] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:06:17.689 [2024-11-26 19:47:08.482590] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:06:17.689 [2024-11-26 19:47:08.482728] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:06:17.689 [2024-11-26 19:47:08.482737] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:06:17.689 [2024-11-26 19:47:08.482873] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:17.689 pt2 00:06:17.689 19:47:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:17.689 19:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:06:17.689 19:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:06:17.689 19:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:06:17.689 19:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:06:17.689 19:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:06:17.689 19:47:08 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:06:17.689 19:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:17.689 19:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:17.689 19:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:17.689 19:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:17.689 19:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:17.689 19:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:06:17.689 19:47:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:17.689 19:47:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:17.689 19:47:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:17.689 19:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:17.689 "name": "raid_bdev1", 00:06:17.689 "uuid": "94f6010c-d08a-4bc1-88f9-8891ea54d8a4", 00:06:17.689 "strip_size_kb": 0, 00:06:17.689 "state": "online", 00:06:17.689 "raid_level": "raid1", 00:06:17.689 "superblock": true, 00:06:17.689 "num_base_bdevs": 2, 00:06:17.689 "num_base_bdevs_discovered": 1, 00:06:17.689 "num_base_bdevs_operational": 1, 00:06:17.689 "base_bdevs_list": [ 00:06:17.689 { 00:06:17.689 "name": null, 00:06:17.689 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:17.689 "is_configured": false, 00:06:17.689 "data_offset": 2048, 00:06:17.689 "data_size": 63488 00:06:17.689 }, 00:06:17.689 { 00:06:17.689 "name": "pt2", 00:06:17.689 "uuid": "00000000-0000-0000-0000-000000000002", 00:06:17.689 "is_configured": true, 00:06:17.689 "data_offset": 2048, 00:06:17.689 "data_size": 63488 00:06:17.689 } 00:06:17.689 ] 00:06:17.689 }' 
00:06:17.689 19:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:17.689 19:47:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:17.947 19:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:06:17.947 19:47:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:17.947 19:47:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:17.947 [2024-11-26 19:47:08.803575] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:06:17.947 [2024-11-26 19:47:08.803615] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:17.947 [2024-11-26 19:47:08.803695] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:17.947 [2024-11-26 19:47:08.803752] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:17.947 [2024-11-26 19:47:08.803762] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:06:17.947 19:47:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:17.947 19:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:17.947 19:47:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:17.947 19:47:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:17.947 19:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:06:17.947 19:47:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:17.947 19:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:06:17.947 19:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' 
']' 00:06:17.947 19:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:06:17.947 19:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:06:17.947 19:47:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:17.947 19:47:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:17.947 [2024-11-26 19:47:08.847617] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:06:17.947 [2024-11-26 19:47:08.847688] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:17.947 [2024-11-26 19:47:08.847709] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:06:17.947 [2024-11-26 19:47:08.847719] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:17.947 [2024-11-26 19:47:08.850078] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:17.947 [2024-11-26 19:47:08.850114] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:06:17.947 [2024-11-26 19:47:08.850207] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:06:17.947 [2024-11-26 19:47:08.850251] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:06:17.947 [2024-11-26 19:47:08.850407] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:06:17.947 [2024-11-26 19:47:08.850419] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:06:17.947 [2024-11-26 19:47:08.850437] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:06:17.947 [2024-11-26 19:47:08.850486] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 
pt2 is claimed 00:06:17.947 [2024-11-26 19:47:08.850564] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:06:17.947 [2024-11-26 19:47:08.850573] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:06:17.947 [2024-11-26 19:47:08.850835] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:06:17.947 [2024-11-26 19:47:08.850988] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:06:17.947 [2024-11-26 19:47:08.851000] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:06:17.947 [2024-11-26 19:47:08.851138] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:17.947 pt1 00:06:17.947 19:47:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:17.947 19:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:06:17.948 19:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:06:17.948 19:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:06:17.948 19:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:06:17.948 19:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:06:17.948 19:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:06:17.948 19:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:06:17.948 19:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:17.948 19:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:17.948 19:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:06:17.948 19:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:17.948 19:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:06:17.948 19:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:17.948 19:47:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:17.948 19:47:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:17.948 19:47:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:18.206 19:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:18.206 "name": "raid_bdev1", 00:06:18.206 "uuid": "94f6010c-d08a-4bc1-88f9-8891ea54d8a4", 00:06:18.206 "strip_size_kb": 0, 00:06:18.206 "state": "online", 00:06:18.206 "raid_level": "raid1", 00:06:18.206 "superblock": true, 00:06:18.206 "num_base_bdevs": 2, 00:06:18.206 "num_base_bdevs_discovered": 1, 00:06:18.206 "num_base_bdevs_operational": 1, 00:06:18.206 "base_bdevs_list": [ 00:06:18.206 { 00:06:18.206 "name": null, 00:06:18.206 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:18.206 "is_configured": false, 00:06:18.206 "data_offset": 2048, 00:06:18.206 "data_size": 63488 00:06:18.206 }, 00:06:18.206 { 00:06:18.206 "name": "pt2", 00:06:18.206 "uuid": "00000000-0000-0000-0000-000000000002", 00:06:18.206 "is_configured": true, 00:06:18.206 "data_offset": 2048, 00:06:18.206 "data_size": 63488 00:06:18.206 } 00:06:18.206 ] 00:06:18.206 }' 00:06:18.206 19:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:18.206 19:47:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:18.463 19:47:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:06:18.463 19:47:09 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:06:18.463 19:47:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:06:18.463 19:47:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:18.463 19:47:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:18.463 19:47:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:06:18.463 19:47:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:06:18.463 19:47:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:06:18.463 19:47:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:18.463 19:47:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:18.463 [2024-11-26 19:47:09.191941] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:18.463 19:47:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:18.463 19:47:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 94f6010c-d08a-4bc1-88f9-8891ea54d8a4 '!=' 94f6010c-d08a-4bc1-88f9-8891ea54d8a4 ']' 00:06:18.463 19:47:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 61819 00:06:18.463 19:47:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 61819 ']' 00:06:18.463 19:47:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 61819 00:06:18.463 19:47:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:06:18.463 19:47:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:18.463 19:47:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61819 00:06:18.463 19:47:09 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:18.463 killing process with pid 61819 00:06:18.463 19:47:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:18.463 19:47:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61819' 00:06:18.463 19:47:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 61819 00:06:18.463 19:47:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 61819 00:06:18.463 [2024-11-26 19:47:09.230878] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:18.463 [2024-11-26 19:47:09.230999] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:18.463 [2024-11-26 19:47:09.231056] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:18.463 [2024-11-26 19:47:09.231071] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:06:18.463 [2024-11-26 19:47:09.372380] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:19.395 19:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:06:19.395 00:06:19.395 real 0m4.625s 00:06:19.395 user 0m6.839s 00:06:19.395 sys 0m0.799s 00:06:19.395 ************************************ 00:06:19.395 END TEST raid_superblock_test 00:06:19.395 ************************************ 00:06:19.395 19:47:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:19.395 19:47:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:19.395 19:47:10 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 2 read 00:06:19.395 19:47:10 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:06:19.395 19:47:10 bdev_raid -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:06:19.395 19:47:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:19.395 ************************************ 00:06:19.395 START TEST raid_read_error_test 00:06:19.395 ************************************ 00:06:19.395 19:47:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 2 read 00:06:19.395 19:47:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:06:19.395 19:47:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:06:19.395 19:47:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:06:19.395 19:47:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:06:19.395 19:47:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:06:19.395 19:47:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:06:19.395 19:47:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:06:19.395 19:47:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:06:19.395 19:47:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:06:19.395 19:47:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:06:19.395 19:47:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:06:19.395 19:47:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:06:19.395 19:47:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:06:19.395 19:47:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:06:19.395 19:47:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:06:19.395 19:47:10 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:06:19.395 19:47:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:06:19.395 19:47:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:06:19.395 19:47:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:06:19.395 19:47:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:06:19.395 19:47:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:06:19.395 19:47:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.ofCyA5adJX 00:06:19.395 19:47:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62129 00:06:19.395 19:47:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62129 00:06:19.395 19:47:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 62129 ']' 00:06:19.395 19:47:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:19.395 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:19.395 19:47:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:19.395 19:47:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:19.395 19:47:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:19.395 19:47:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:06:19.395 19:47:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:19.395 [2024-11-26 19:47:10.294406] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 00:06:19.395 [2024-11-26 19:47:10.294533] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62129 ] 00:06:19.717 [2024-11-26 19:47:10.456190] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.717 [2024-11-26 19:47:10.579987] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.974 [2024-11-26 19:47:10.735061] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:19.974 [2024-11-26 19:47:10.735318] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:20.541 19:47:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:20.541 19:47:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:06:20.541 19:47:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:06:20.541 19:47:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:06:20.541 19:47:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:20.541 19:47:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:20.541 BaseBdev1_malloc 00:06:20.541 19:47:11 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:20.541 19:47:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:06:20.541 19:47:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:20.541 19:47:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:20.541 true 00:06:20.541 19:47:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:20.541 19:47:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:06:20.541 19:47:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:20.541 19:47:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:20.541 [2024-11-26 19:47:11.257106] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:06:20.541 [2024-11-26 19:47:11.257179] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:20.541 [2024-11-26 19:47:11.257201] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:06:20.541 [2024-11-26 19:47:11.257213] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:20.541 [2024-11-26 19:47:11.259574] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:20.541 [2024-11-26 19:47:11.259617] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:06:20.541 BaseBdev1 00:06:20.541 19:47:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:20.541 19:47:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:06:20.541 19:47:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev2_malloc 00:06:20.541 19:47:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:20.541 19:47:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:20.541 BaseBdev2_malloc 00:06:20.541 19:47:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:20.541 19:47:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:06:20.541 19:47:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:20.541 19:47:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:20.541 true 00:06:20.541 19:47:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:20.541 19:47:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:06:20.541 19:47:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:20.541 19:47:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:20.541 [2024-11-26 19:47:11.305213] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:06:20.541 [2024-11-26 19:47:11.305276] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:20.542 [2024-11-26 19:47:11.305296] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:06:20.542 [2024-11-26 19:47:11.305309] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:20.542 [2024-11-26 19:47:11.307999] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:20.542 [2024-11-26 19:47:11.308046] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:06:20.542 BaseBdev2 00:06:20.542 19:47:11 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:20.542 19:47:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:06:20.542 19:47:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:20.542 19:47:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:20.542 [2024-11-26 19:47:11.313283] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:06:20.542 [2024-11-26 19:47:11.315487] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:06:20.542 [2024-11-26 19:47:11.315704] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:06:20.542 [2024-11-26 19:47:11.315719] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:06:20.542 [2024-11-26 19:47:11.316020] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:06:20.542 [2024-11-26 19:47:11.316197] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:06:20.542 [2024-11-26 19:47:11.316207] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:06:20.542 [2024-11-26 19:47:11.316400] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:20.542 19:47:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:20.542 19:47:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:06:20.542 19:47:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:06:20.542 19:47:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:06:20.542 19:47:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:06:20.542 19:47:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:06:20.542 19:47:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:20.542 19:47:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:20.542 19:47:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:20.542 19:47:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:20.542 19:47:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:20.542 19:47:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:20.542 19:47:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:06:20.542 19:47:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:20.542 19:47:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:20.542 19:47:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:20.542 19:47:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:20.542 "name": "raid_bdev1", 00:06:20.542 "uuid": "5cf5a612-6c37-4a24-ac4e-64ea6943f3d8", 00:06:20.542 "strip_size_kb": 0, 00:06:20.542 "state": "online", 00:06:20.542 "raid_level": "raid1", 00:06:20.542 "superblock": true, 00:06:20.542 "num_base_bdevs": 2, 00:06:20.542 "num_base_bdevs_discovered": 2, 00:06:20.542 "num_base_bdevs_operational": 2, 00:06:20.542 "base_bdevs_list": [ 00:06:20.542 { 00:06:20.542 "name": "BaseBdev1", 00:06:20.542 "uuid": "49bb94db-e6ca-5a36-90ba-6f7920f472ad", 00:06:20.542 "is_configured": true, 00:06:20.542 "data_offset": 2048, 00:06:20.542 "data_size": 63488 00:06:20.542 }, 00:06:20.542 { 00:06:20.542 "name": "BaseBdev2", 00:06:20.542 "uuid": 
"fa734b01-d595-53ed-88aa-f4a3f1798ef4", 00:06:20.542 "is_configured": true, 00:06:20.542 "data_offset": 2048, 00:06:20.542 "data_size": 63488 00:06:20.542 } 00:06:20.542 ] 00:06:20.542 }' 00:06:20.542 19:47:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:20.542 19:47:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:20.799 19:47:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:06:20.799 19:47:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:06:20.799 [2024-11-26 19:47:11.722466] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:06:21.764 19:47:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:06:21.764 19:47:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:21.764 19:47:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:21.764 19:47:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:21.764 19:47:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:06:21.764 19:47:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:06:21.764 19:47:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:06:21.764 19:47:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:06:21.764 19:47:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:06:21.764 19:47:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:06:21.764 19:47:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:06:21.764 19:47:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:06:21.764 19:47:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:06:21.764 19:47:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:21.764 19:47:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:21.764 19:47:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:21.764 19:47:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:21.764 19:47:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:21.764 19:47:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:21.764 19:47:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:06:21.764 19:47:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:21.764 19:47:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:21.764 19:47:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:21.764 19:47:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:21.764 "name": "raid_bdev1", 00:06:21.764 "uuid": "5cf5a612-6c37-4a24-ac4e-64ea6943f3d8", 00:06:21.764 "strip_size_kb": 0, 00:06:21.764 "state": "online", 00:06:21.764 "raid_level": "raid1", 00:06:21.764 "superblock": true, 00:06:21.764 "num_base_bdevs": 2, 00:06:21.764 "num_base_bdevs_discovered": 2, 00:06:21.764 "num_base_bdevs_operational": 2, 00:06:21.764 "base_bdevs_list": [ 00:06:21.764 { 00:06:21.764 "name": "BaseBdev1", 00:06:21.764 "uuid": "49bb94db-e6ca-5a36-90ba-6f7920f472ad", 00:06:21.764 "is_configured": true, 00:06:21.764 "data_offset": 2048, 00:06:21.764 
"data_size": 63488 00:06:21.764 }, 00:06:21.764 { 00:06:21.764 "name": "BaseBdev2", 00:06:21.764 "uuid": "fa734b01-d595-53ed-88aa-f4a3f1798ef4", 00:06:21.764 "is_configured": true, 00:06:21.764 "data_offset": 2048, 00:06:21.764 "data_size": 63488 00:06:21.764 } 00:06:21.764 ] 00:06:21.764 }' 00:06:21.764 19:47:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:21.764 19:47:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:22.022 19:47:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:06:22.022 19:47:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:22.022 19:47:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:22.022 [2024-11-26 19:47:12.948677] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:06:22.022 [2024-11-26 19:47:12.948897] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:22.022 [2024-11-26 19:47:12.952005] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:22.022 [2024-11-26 19:47:12.952152] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:22.022 [2024-11-26 19:47:12.952259] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:22.022 [2024-11-26 19:47:12.952273] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:06:22.022 { 00:06:22.022 "results": [ 00:06:22.022 { 00:06:22.022 "job": "raid_bdev1", 00:06:22.022 "core_mask": "0x1", 00:06:22.022 "workload": "randrw", 00:06:22.022 "percentage": 50, 00:06:22.022 "status": "finished", 00:06:22.022 "queue_depth": 1, 00:06:22.022 "io_size": 131072, 00:06:22.022 "runtime": 1.224344, 00:06:22.022 "iops": 15596.923740386688, 00:06:22.022 "mibps": 1949.615467548336, 
00:06:22.022 "io_failed": 0, 00:06:22.022 "io_timeout": 0, 00:06:22.022 "avg_latency_us": 60.87137120943573, 00:06:22.022 "min_latency_us": 29.341538461538462, 00:06:22.022 "max_latency_us": 1928.2707692307692 00:06:22.022 } 00:06:22.022 ], 00:06:22.022 "core_count": 1 00:06:22.022 } 00:06:22.022 19:47:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:22.022 19:47:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62129 00:06:22.022 19:47:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 62129 ']' 00:06:22.022 19:47:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 62129 00:06:22.022 19:47:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:06:22.278 19:47:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:22.278 19:47:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62129 00:06:22.278 killing process with pid 62129 00:06:22.278 19:47:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:22.278 19:47:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:22.278 19:47:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62129' 00:06:22.278 19:47:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 62129 00:06:22.278 19:47:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 62129 00:06:22.278 [2024-11-26 19:47:12.983796] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:22.278 [2024-11-26 19:47:13.075400] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:23.209 19:47:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.ofCyA5adJX 00:06:23.209 19:47:13 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:06:23.209 19:47:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:06:23.209 ************************************ 00:06:23.209 END TEST raid_read_error_test 00:06:23.209 ************************************ 00:06:23.209 19:47:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:06:23.209 19:47:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:06:23.209 19:47:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:06:23.209 19:47:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:06:23.209 19:47:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:06:23.209 00:06:23.209 real 0m3.691s 00:06:23.209 user 0m4.376s 00:06:23.209 sys 0m0.442s 00:06:23.209 19:47:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:23.209 19:47:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:23.209 19:47:13 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 2 write 00:06:23.209 19:47:13 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:06:23.209 19:47:13 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:23.209 19:47:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:23.209 ************************************ 00:06:23.209 START TEST raid_write_error_test 00:06:23.209 ************************************ 00:06:23.209 19:47:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 2 write 00:06:23.209 19:47:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:06:23.209 19:47:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:06:23.209 19:47:13 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:06:23.209 19:47:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:06:23.209 19:47:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:06:23.209 19:47:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:06:23.209 19:47:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:06:23.209 19:47:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:06:23.209 19:47:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:06:23.209 19:47:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:06:23.209 19:47:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:06:23.209 19:47:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:06:23.209 19:47:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:06:23.209 19:47:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:06:23.209 19:47:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:06:23.209 19:47:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:06:23.209 19:47:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:06:23.209 19:47:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:06:23.209 19:47:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:06:23.209 19:47:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:06:23.209 19:47:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:06:23.209 Waiting for process to start up 
and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:23.209 19:47:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.EgQ69LZUCU 00:06:23.209 19:47:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62269 00:06:23.209 19:47:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62269 00:06:23.209 19:47:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 62269 ']' 00:06:23.209 19:47:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:23.209 19:47:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:23.209 19:47:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:23.209 19:47:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:23.209 19:47:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:23.209 19:47:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:06:23.209 [2024-11-26 19:47:14.028819] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 
00:06:23.209 [2024-11-26 19:47:14.028955] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62269 ] 00:06:23.466 [2024-11-26 19:47:14.190407] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.466 [2024-11-26 19:47:14.310273] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.723 [2024-11-26 19:47:14.459326] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:23.723 [2024-11-26 19:47:14.459413] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:23.980 19:47:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:23.980 19:47:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:06:23.980 19:47:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:06:23.980 19:47:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:06:23.980 19:47:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:23.980 19:47:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:24.239 BaseBdev1_malloc 00:06:24.239 19:47:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:24.239 19:47:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:06:24.239 19:47:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:24.239 19:47:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:24.239 true 00:06:24.239 19:47:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:06:24.239 19:47:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:06:24.239 19:47:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:24.239 19:47:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:24.239 [2024-11-26 19:47:14.937815] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:06:24.239 [2024-11-26 19:47:14.938089] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:24.239 [2024-11-26 19:47:14.938121] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:06:24.239 [2024-11-26 19:47:14.938134] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:24.239 [2024-11-26 19:47:14.940538] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:24.239 [2024-11-26 19:47:14.940579] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:06:24.239 BaseBdev1 00:06:24.239 19:47:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:24.239 19:47:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:06:24.239 19:47:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:06:24.239 19:47:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:24.239 19:47:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:24.239 BaseBdev2_malloc 00:06:24.239 19:47:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:24.239 19:47:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:06:24.239 19:47:14 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:24.239 19:47:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:24.239 true 00:06:24.239 19:47:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:24.239 19:47:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:06:24.239 19:47:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:24.239 19:47:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:24.239 [2024-11-26 19:47:14.984926] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:06:24.239 [2024-11-26 19:47:14.985203] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:24.239 [2024-11-26 19:47:14.985232] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:06:24.239 [2024-11-26 19:47:14.985244] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:24.239 [2024-11-26 19:47:14.987753] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:24.239 [2024-11-26 19:47:14.987801] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:06:24.239 BaseBdev2 00:06:24.239 19:47:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:24.239 19:47:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:06:24.239 19:47:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:24.239 19:47:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:24.239 [2024-11-26 19:47:14.992995] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:06:24.239 [2024-11-26 19:47:14.995082] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:06:24.239 [2024-11-26 19:47:14.995313] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:06:24.239 [2024-11-26 19:47:14.995328] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:06:24.239 [2024-11-26 19:47:14.995647] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:06:24.239 [2024-11-26 19:47:14.995830] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:06:24.239 [2024-11-26 19:47:14.995887] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:06:24.239 [2024-11-26 19:47:14.996087] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:24.239 19:47:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:24.239 19:47:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:06:24.239 19:47:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:06:24.239 19:47:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:06:24.239 19:47:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:06:24.239 19:47:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:06:24.239 19:47:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:24.239 19:47:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:24.239 19:47:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:24.240 19:47:14 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:24.240 19:47:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:24.240 19:47:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:24.240 19:47:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:06:24.240 19:47:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:24.240 19:47:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:24.240 19:47:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:24.240 19:47:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:24.240 "name": "raid_bdev1", 00:06:24.240 "uuid": "00e75107-6dcc-4483-bff9-6c2207d25da4", 00:06:24.240 "strip_size_kb": 0, 00:06:24.240 "state": "online", 00:06:24.240 "raid_level": "raid1", 00:06:24.240 "superblock": true, 00:06:24.240 "num_base_bdevs": 2, 00:06:24.240 "num_base_bdevs_discovered": 2, 00:06:24.240 "num_base_bdevs_operational": 2, 00:06:24.240 "base_bdevs_list": [ 00:06:24.240 { 00:06:24.240 "name": "BaseBdev1", 00:06:24.240 "uuid": "72e831ac-6a2c-5a3b-8d7b-f10d075cf80f", 00:06:24.240 "is_configured": true, 00:06:24.240 "data_offset": 2048, 00:06:24.240 "data_size": 63488 00:06:24.240 }, 00:06:24.240 { 00:06:24.240 "name": "BaseBdev2", 00:06:24.240 "uuid": "7bce054b-95f9-539c-918a-2c8dfdad2c77", 00:06:24.240 "is_configured": true, 00:06:24.240 "data_offset": 2048, 00:06:24.240 "data_size": 63488 00:06:24.240 } 00:06:24.240 ] 00:06:24.240 }' 00:06:24.240 19:47:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:24.240 19:47:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:24.497 19:47:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # 
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:06:24.497 19:47:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:06:24.754 [2024-11-26 19:47:15.446107] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:06:25.687 19:47:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:06:25.687 19:47:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:25.687 19:47:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:25.687 [2024-11-26 19:47:16.325308] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:06:25.687 [2024-11-26 19:47:16.325396] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:06:25.687 [2024-11-26 19:47:16.325605] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d0000063c0 00:06:25.687 19:47:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:25.687 19:47:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:06:25.687 19:47:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:06:25.687 19:47:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:06:25.687 19:47:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=1 00:06:25.687 19:47:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:06:25.687 19:47:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:06:25.687 19:47:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:06:25.687 19:47:16 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:06:25.687 19:47:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:06:25.687 19:47:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:06:25.688 19:47:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:25.688 19:47:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:25.688 19:47:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:25.688 19:47:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:25.688 19:47:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:25.688 19:47:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:25.688 19:47:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:06:25.688 19:47:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:25.688 19:47:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:25.688 19:47:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:25.688 "name": "raid_bdev1", 00:06:25.688 "uuid": "00e75107-6dcc-4483-bff9-6c2207d25da4", 00:06:25.688 "strip_size_kb": 0, 00:06:25.688 "state": "online", 00:06:25.688 "raid_level": "raid1", 00:06:25.688 "superblock": true, 00:06:25.688 "num_base_bdevs": 2, 00:06:25.688 "num_base_bdevs_discovered": 1, 00:06:25.688 "num_base_bdevs_operational": 1, 00:06:25.688 "base_bdevs_list": [ 00:06:25.688 { 00:06:25.688 "name": null, 00:06:25.688 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:25.688 "is_configured": false, 00:06:25.688 "data_offset": 0, 00:06:25.688 "data_size": 63488 00:06:25.688 }, 
00:06:25.688 { 00:06:25.688 "name": "BaseBdev2", 00:06:25.688 "uuid": "7bce054b-95f9-539c-918a-2c8dfdad2c77", 00:06:25.688 "is_configured": true, 00:06:25.688 "data_offset": 2048, 00:06:25.688 "data_size": 63488 00:06:25.688 } 00:06:25.688 ] 00:06:25.688 }' 00:06:25.688 19:47:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:25.688 19:47:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:25.946 19:47:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:06:25.946 19:47:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:25.946 19:47:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:25.946 [2024-11-26 19:47:16.655407] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:06:25.946 [2024-11-26 19:47:16.655438] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:25.946 [2024-11-26 19:47:16.658581] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:25.946 [2024-11-26 19:47:16.658708] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:25.946 [2024-11-26 19:47:16.658853] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:25.946 [2024-11-26 19:47:16.658936] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:06:25.946 { 00:06:25.946 "results": [ 00:06:25.946 { 00:06:25.946 "job": "raid_bdev1", 00:06:25.946 "core_mask": "0x1", 00:06:25.946 "workload": "randrw", 00:06:25.946 "percentage": 50, 00:06:25.946 "status": "finished", 00:06:25.946 "queue_depth": 1, 00:06:25.946 "io_size": 131072, 00:06:25.946 "runtime": 1.207179, 00:06:25.946 "iops": 17843.25274048008, 00:06:25.946 "mibps": 2230.40659256001, 00:06:25.946 "io_failed": 0, 
00:06:25.946 "io_timeout": 0, 00:06:25.946 "avg_latency_us": 52.83455753160489, 00:06:25.946 "min_latency_us": 28.75076923076923, 00:06:25.946 "max_latency_us": 1663.6061538461538 00:06:25.946 } 00:06:25.946 ], 00:06:25.946 "core_count": 1 00:06:25.946 } 00:06:25.946 19:47:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:25.946 19:47:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62269 00:06:25.946 19:47:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 62269 ']' 00:06:25.946 19:47:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 62269 00:06:25.946 19:47:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:06:25.946 19:47:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:25.946 19:47:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62269 00:06:25.946 19:47:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:25.946 killing process with pid 62269 00:06:25.946 19:47:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:25.946 19:47:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62269' 00:06:25.946 19:47:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 62269 00:06:25.946 [2024-11-26 19:47:16.687521] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:25.946 19:47:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 62269 00:06:25.946 [2024-11-26 19:47:16.775538] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:26.880 19:47:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.EgQ69LZUCU 00:06:26.880 19:47:17 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:06:26.880 19:47:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:06:26.880 19:47:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:06:26.880 19:47:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:06:26.880 ************************************ 00:06:26.880 END TEST raid_write_error_test 00:06:26.880 ************************************ 00:06:26.880 19:47:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:06:26.880 19:47:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:06:26.880 19:47:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:06:26.880 00:06:26.880 real 0m3.621s 00:06:26.880 user 0m4.339s 00:06:26.880 sys 0m0.438s 00:06:26.880 19:47:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:26.880 19:47:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:26.880 19:47:17 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:06:26.880 19:47:17 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:06:26.880 19:47:17 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:06:26.880 19:47:17 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:06:26.880 19:47:17 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:26.880 19:47:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:26.880 ************************************ 00:06:26.880 START TEST raid_state_function_test 00:06:26.880 ************************************ 00:06:26.880 19:47:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 3 false 00:06:26.880 19:47:17 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:06:26.880 19:47:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:06:26.880 19:47:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:06:26.880 19:47:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:06:26.880 19:47:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:06:26.880 19:47:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:06:26.880 19:47:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:06:26.880 19:47:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:06:26.880 19:47:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:06:26.880 19:47:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:06:26.880 19:47:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:06:26.880 19:47:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:06:26.880 19:47:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:06:26.880 19:47:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:06:26.880 19:47:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:06:26.880 Process raid pid: 62402 00:06:26.880 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:26.880 19:47:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:06:26.880 19:47:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:06:26.880 19:47:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:06:26.880 19:47:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:06:26.880 19:47:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:06:26.880 19:47:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:06:26.880 19:47:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:06:26.880 19:47:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:06:26.880 19:47:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:06:26.880 19:47:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:06:26.880 19:47:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:06:26.880 19:47:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=62402 00:06:26.880 19:47:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62402' 00:06:26.880 19:47:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 62402 00:06:26.880 19:47:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 62402 ']' 00:06:26.880 19:47:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:26.880 19:47:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:26.880 19:47:17 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:26.880 19:47:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:26.880 19:47:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:26.880 19:47:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:26.881 [2024-11-26 19:47:17.686492] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 00:06:26.881 [2024-11-26 19:47:17.686627] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:27.138 [2024-11-26 19:47:17.841476] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.138 [2024-11-26 19:47:17.961090] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.395 [2024-11-26 19:47:18.110413] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:27.395 [2024-11-26 19:47:18.110465] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:27.653 19:47:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:27.653 19:47:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:06:27.653 19:47:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:06:27.653 19:47:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:27.653 19:47:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:27.653 [2024-11-26 
19:47:18.545262] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:06:27.653 [2024-11-26 19:47:18.545319] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:06:27.653 [2024-11-26 19:47:18.545330] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:27.653 [2024-11-26 19:47:18.545340] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:27.653 [2024-11-26 19:47:18.545361] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:06:27.653 [2024-11-26 19:47:18.545370] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:06:27.653 19:47:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:27.653 19:47:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:06:27.653 19:47:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:27.653 19:47:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:27.653 19:47:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:27.653 19:47:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:27.653 19:47:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:06:27.653 19:47:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:27.653 19:47:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:27.653 19:47:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:27.653 19:47:18 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:06:27.653 19:47:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:27.653 19:47:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:27.653 19:47:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:27.653 19:47:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:27.653 19:47:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:27.653 19:47:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:27.653 "name": "Existed_Raid", 00:06:27.653 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:27.653 "strip_size_kb": 64, 00:06:27.653 "state": "configuring", 00:06:27.653 "raid_level": "raid0", 00:06:27.653 "superblock": false, 00:06:27.653 "num_base_bdevs": 3, 00:06:27.653 "num_base_bdevs_discovered": 0, 00:06:27.653 "num_base_bdevs_operational": 3, 00:06:27.654 "base_bdevs_list": [ 00:06:27.654 { 00:06:27.654 "name": "BaseBdev1", 00:06:27.654 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:27.654 "is_configured": false, 00:06:27.654 "data_offset": 0, 00:06:27.654 "data_size": 0 00:06:27.654 }, 00:06:27.654 { 00:06:27.654 "name": "BaseBdev2", 00:06:27.654 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:27.654 "is_configured": false, 00:06:27.654 "data_offset": 0, 00:06:27.654 "data_size": 0 00:06:27.654 }, 00:06:27.654 { 00:06:27.654 "name": "BaseBdev3", 00:06:27.654 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:27.654 "is_configured": false, 00:06:27.654 "data_offset": 0, 00:06:27.654 "data_size": 0 00:06:27.654 } 00:06:27.654 ] 00:06:27.654 }' 00:06:27.654 19:47:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:27.654 19:47:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 
-- # set +x 00:06:28.221 19:47:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:06:28.221 19:47:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:28.221 19:47:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:28.221 [2024-11-26 19:47:18.925309] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:06:28.221 [2024-11-26 19:47:18.925364] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:06:28.221 19:47:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:28.221 19:47:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:06:28.221 19:47:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:28.221 19:47:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:28.221 [2024-11-26 19:47:18.933295] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:06:28.221 [2024-11-26 19:47:18.933340] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:06:28.221 [2024-11-26 19:47:18.933363] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:28.221 [2024-11-26 19:47:18.933373] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:28.221 [2024-11-26 19:47:18.933379] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:06:28.221 [2024-11-26 19:47:18.933388] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:06:28.221 19:47:18 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:28.221 19:47:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:06:28.221 19:47:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:28.221 19:47:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:28.221 [2024-11-26 19:47:18.968313] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:06:28.221 BaseBdev1 00:06:28.221 19:47:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:28.221 19:47:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:06:28.221 19:47:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:06:28.221 19:47:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:06:28.221 19:47:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:06:28.221 19:47:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:06:28.221 19:47:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:06:28.221 19:47:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:06:28.221 19:47:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:28.221 19:47:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:28.221 19:47:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:28.221 19:47:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:06:28.221 19:47:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:06:28.221 19:47:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:28.221 [ 00:06:28.221 { 00:06:28.221 "name": "BaseBdev1", 00:06:28.221 "aliases": [ 00:06:28.221 "db14a4e3-da12-4097-8803-d6b9875c9a6e" 00:06:28.221 ], 00:06:28.221 "product_name": "Malloc disk", 00:06:28.221 "block_size": 512, 00:06:28.221 "num_blocks": 65536, 00:06:28.221 "uuid": "db14a4e3-da12-4097-8803-d6b9875c9a6e", 00:06:28.221 "assigned_rate_limits": { 00:06:28.221 "rw_ios_per_sec": 0, 00:06:28.221 "rw_mbytes_per_sec": 0, 00:06:28.221 "r_mbytes_per_sec": 0, 00:06:28.221 "w_mbytes_per_sec": 0 00:06:28.221 }, 00:06:28.221 "claimed": true, 00:06:28.221 "claim_type": "exclusive_write", 00:06:28.221 "zoned": false, 00:06:28.221 "supported_io_types": { 00:06:28.221 "read": true, 00:06:28.221 "write": true, 00:06:28.221 "unmap": true, 00:06:28.221 "flush": true, 00:06:28.221 "reset": true, 00:06:28.221 "nvme_admin": false, 00:06:28.221 "nvme_io": false, 00:06:28.221 "nvme_io_md": false, 00:06:28.221 "write_zeroes": true, 00:06:28.221 "zcopy": true, 00:06:28.221 "get_zone_info": false, 00:06:28.221 "zone_management": false, 00:06:28.221 "zone_append": false, 00:06:28.221 "compare": false, 00:06:28.221 "compare_and_write": false, 00:06:28.221 "abort": true, 00:06:28.221 "seek_hole": false, 00:06:28.221 "seek_data": false, 00:06:28.221 "copy": true, 00:06:28.221 "nvme_iov_md": false 00:06:28.221 }, 00:06:28.221 "memory_domains": [ 00:06:28.221 { 00:06:28.221 "dma_device_id": "system", 00:06:28.221 "dma_device_type": 1 00:06:28.221 }, 00:06:28.221 { 00:06:28.221 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:28.221 "dma_device_type": 2 00:06:28.221 } 00:06:28.221 ], 00:06:28.221 "driver_specific": {} 00:06:28.221 } 00:06:28.221 ] 00:06:28.221 19:47:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:28.221 19:47:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 
00:06:28.221 19:47:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:06:28.221 19:47:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:28.221 19:47:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:28.221 19:47:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:28.221 19:47:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:28.221 19:47:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:06:28.221 19:47:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:28.221 19:47:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:28.221 19:47:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:28.221 19:47:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:28.221 19:47:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:28.221 19:47:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:28.221 19:47:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:28.221 19:47:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:28.221 19:47:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:28.221 19:47:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:28.221 "name": "Existed_Raid", 00:06:28.221 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:28.221 "strip_size_kb": 64, 00:06:28.221 "state": 
"configuring", 00:06:28.221 "raid_level": "raid0", 00:06:28.221 "superblock": false, 00:06:28.221 "num_base_bdevs": 3, 00:06:28.221 "num_base_bdevs_discovered": 1, 00:06:28.221 "num_base_bdevs_operational": 3, 00:06:28.221 "base_bdevs_list": [ 00:06:28.221 { 00:06:28.221 "name": "BaseBdev1", 00:06:28.221 "uuid": "db14a4e3-da12-4097-8803-d6b9875c9a6e", 00:06:28.221 "is_configured": true, 00:06:28.221 "data_offset": 0, 00:06:28.221 "data_size": 65536 00:06:28.221 }, 00:06:28.221 { 00:06:28.221 "name": "BaseBdev2", 00:06:28.221 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:28.221 "is_configured": false, 00:06:28.221 "data_offset": 0, 00:06:28.221 "data_size": 0 00:06:28.221 }, 00:06:28.221 { 00:06:28.221 "name": "BaseBdev3", 00:06:28.221 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:28.221 "is_configured": false, 00:06:28.221 "data_offset": 0, 00:06:28.221 "data_size": 0 00:06:28.221 } 00:06:28.221 ] 00:06:28.221 }' 00:06:28.221 19:47:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:28.221 19:47:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:28.492 19:47:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:06:28.492 19:47:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:28.492 19:47:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:28.492 [2024-11-26 19:47:19.356458] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:06:28.492 [2024-11-26 19:47:19.356513] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:06:28.492 19:47:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:28.492 19:47:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b 
''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:06:28.492 19:47:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:28.492 19:47:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:28.492 [2024-11-26 19:47:19.364505] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:06:28.492 [2024-11-26 19:47:19.366474] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:28.492 [2024-11-26 19:47:19.366513] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:28.492 [2024-11-26 19:47:19.366523] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:06:28.492 [2024-11-26 19:47:19.366533] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:06:28.492 19:47:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:28.492 19:47:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:06:28.492 19:47:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:06:28.492 19:47:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:06:28.492 19:47:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:28.492 19:47:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:28.492 19:47:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:28.492 19:47:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:28.492 19:47:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:06:28.492 19:47:19 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:28.492 19:47:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:28.492 19:47:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:28.492 19:47:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:28.492 19:47:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:28.492 19:47:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:28.492 19:47:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:28.492 19:47:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:28.492 19:47:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:28.492 19:47:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:28.492 "name": "Existed_Raid", 00:06:28.492 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:28.492 "strip_size_kb": 64, 00:06:28.492 "state": "configuring", 00:06:28.492 "raid_level": "raid0", 00:06:28.492 "superblock": false, 00:06:28.492 "num_base_bdevs": 3, 00:06:28.492 "num_base_bdevs_discovered": 1, 00:06:28.492 "num_base_bdevs_operational": 3, 00:06:28.492 "base_bdevs_list": [ 00:06:28.492 { 00:06:28.492 "name": "BaseBdev1", 00:06:28.492 "uuid": "db14a4e3-da12-4097-8803-d6b9875c9a6e", 00:06:28.492 "is_configured": true, 00:06:28.492 "data_offset": 0, 00:06:28.492 "data_size": 65536 00:06:28.492 }, 00:06:28.492 { 00:06:28.492 "name": "BaseBdev2", 00:06:28.492 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:28.492 "is_configured": false, 00:06:28.492 "data_offset": 0, 00:06:28.492 "data_size": 0 00:06:28.492 }, 00:06:28.492 { 00:06:28.492 "name": "BaseBdev3", 00:06:28.492 
"uuid": "00000000-0000-0000-0000-000000000000", 00:06:28.492 "is_configured": false, 00:06:28.492 "data_offset": 0, 00:06:28.492 "data_size": 0 00:06:28.492 } 00:06:28.492 ] 00:06:28.492 }' 00:06:28.492 19:47:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:28.492 19:47:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:28.749 19:47:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:06:28.749 19:47:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:28.749 19:47:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:29.006 [2024-11-26 19:47:19.709520] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:06:29.006 BaseBdev2 00:06:29.006 19:47:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:29.006 19:47:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:06:29.006 19:47:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:06:29.006 19:47:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:06:29.006 19:47:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:06:29.006 19:47:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:06:29.006 19:47:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:06:29.006 19:47:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:06:29.006 19:47:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:29.006 19:47:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:06:29.006 19:47:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:29.006 19:47:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:06:29.006 19:47:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:29.006 19:47:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:29.006 [ 00:06:29.006 { 00:06:29.006 "name": "BaseBdev2", 00:06:29.006 "aliases": [ 00:06:29.006 "20eed5f4-96b9-44b5-a38d-7efc8330c8e9" 00:06:29.006 ], 00:06:29.006 "product_name": "Malloc disk", 00:06:29.006 "block_size": 512, 00:06:29.006 "num_blocks": 65536, 00:06:29.006 "uuid": "20eed5f4-96b9-44b5-a38d-7efc8330c8e9", 00:06:29.006 "assigned_rate_limits": { 00:06:29.006 "rw_ios_per_sec": 0, 00:06:29.006 "rw_mbytes_per_sec": 0, 00:06:29.006 "r_mbytes_per_sec": 0, 00:06:29.006 "w_mbytes_per_sec": 0 00:06:29.006 }, 00:06:29.006 "claimed": true, 00:06:29.006 "claim_type": "exclusive_write", 00:06:29.006 "zoned": false, 00:06:29.006 "supported_io_types": { 00:06:29.006 "read": true, 00:06:29.006 "write": true, 00:06:29.006 "unmap": true, 00:06:29.006 "flush": true, 00:06:29.006 "reset": true, 00:06:29.006 "nvme_admin": false, 00:06:29.006 "nvme_io": false, 00:06:29.006 "nvme_io_md": false, 00:06:29.006 "write_zeroes": true, 00:06:29.006 "zcopy": true, 00:06:29.006 "get_zone_info": false, 00:06:29.006 "zone_management": false, 00:06:29.006 "zone_append": false, 00:06:29.006 "compare": false, 00:06:29.006 "compare_and_write": false, 00:06:29.006 "abort": true, 00:06:29.006 "seek_hole": false, 00:06:29.006 "seek_data": false, 00:06:29.006 "copy": true, 00:06:29.006 "nvme_iov_md": false 00:06:29.006 }, 00:06:29.006 "memory_domains": [ 00:06:29.006 { 00:06:29.006 "dma_device_id": "system", 00:06:29.006 "dma_device_type": 1 00:06:29.006 }, 00:06:29.006 { 00:06:29.006 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:06:29.006 "dma_device_type": 2 00:06:29.006 } 00:06:29.006 ], 00:06:29.006 "driver_specific": {} 00:06:29.006 } 00:06:29.006 ] 00:06:29.006 19:47:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:29.006 19:47:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:06:29.006 19:47:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:06:29.006 19:47:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:06:29.006 19:47:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:06:29.006 19:47:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:29.006 19:47:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:29.006 19:47:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:29.006 19:47:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:29.006 19:47:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:06:29.006 19:47:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:29.006 19:47:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:29.006 19:47:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:29.006 19:47:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:29.006 19:47:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:29.006 19:47:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:29.006 19:47:19 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:29.006 19:47:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:29.006 19:47:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:29.006 19:47:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:29.006 "name": "Existed_Raid", 00:06:29.006 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:29.006 "strip_size_kb": 64, 00:06:29.006 "state": "configuring", 00:06:29.006 "raid_level": "raid0", 00:06:29.006 "superblock": false, 00:06:29.006 "num_base_bdevs": 3, 00:06:29.006 "num_base_bdevs_discovered": 2, 00:06:29.006 "num_base_bdevs_operational": 3, 00:06:29.006 "base_bdevs_list": [ 00:06:29.006 { 00:06:29.006 "name": "BaseBdev1", 00:06:29.006 "uuid": "db14a4e3-da12-4097-8803-d6b9875c9a6e", 00:06:29.006 "is_configured": true, 00:06:29.006 "data_offset": 0, 00:06:29.006 "data_size": 65536 00:06:29.006 }, 00:06:29.006 { 00:06:29.006 "name": "BaseBdev2", 00:06:29.006 "uuid": "20eed5f4-96b9-44b5-a38d-7efc8330c8e9", 00:06:29.006 "is_configured": true, 00:06:29.006 "data_offset": 0, 00:06:29.006 "data_size": 65536 00:06:29.006 }, 00:06:29.006 { 00:06:29.006 "name": "BaseBdev3", 00:06:29.006 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:29.006 "is_configured": false, 00:06:29.006 "data_offset": 0, 00:06:29.006 "data_size": 0 00:06:29.006 } 00:06:29.006 ] 00:06:29.006 }' 00:06:29.006 19:47:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:29.006 19:47:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:29.264 19:47:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:06:29.264 19:47:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:29.264 19:47:20 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:29.264 [2024-11-26 19:47:20.104605] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:06:29.264 [2024-11-26 19:47:20.104650] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:06:29.264 [2024-11-26 19:47:20.104664] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:06:29.264 [2024-11-26 19:47:20.104936] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:06:29.264 [2024-11-26 19:47:20.105093] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:06:29.264 [2024-11-26 19:47:20.105102] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:06:29.264 [2024-11-26 19:47:20.105379] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:29.264 BaseBdev3 00:06:29.264 19:47:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:29.264 19:47:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:06:29.264 19:47:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:06:29.264 19:47:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:06:29.264 19:47:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:06:29.264 19:47:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:06:29.264 19:47:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:06:29.264 19:47:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:06:29.265 19:47:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:06:29.265 19:47:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:29.265 19:47:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:29.265 19:47:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:06:29.265 19:47:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:29.265 19:47:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:29.265 [ 00:06:29.265 { 00:06:29.265 "name": "BaseBdev3", 00:06:29.265 "aliases": [ 00:06:29.265 "7a9fc3be-d3b5-47b7-8bec-6d0fad8873ee" 00:06:29.265 ], 00:06:29.265 "product_name": "Malloc disk", 00:06:29.265 "block_size": 512, 00:06:29.265 "num_blocks": 65536, 00:06:29.265 "uuid": "7a9fc3be-d3b5-47b7-8bec-6d0fad8873ee", 00:06:29.265 "assigned_rate_limits": { 00:06:29.265 "rw_ios_per_sec": 0, 00:06:29.265 "rw_mbytes_per_sec": 0, 00:06:29.265 "r_mbytes_per_sec": 0, 00:06:29.265 "w_mbytes_per_sec": 0 00:06:29.265 }, 00:06:29.265 "claimed": true, 00:06:29.265 "claim_type": "exclusive_write", 00:06:29.265 "zoned": false, 00:06:29.265 "supported_io_types": { 00:06:29.265 "read": true, 00:06:29.265 "write": true, 00:06:29.265 "unmap": true, 00:06:29.265 "flush": true, 00:06:29.265 "reset": true, 00:06:29.265 "nvme_admin": false, 00:06:29.265 "nvme_io": false, 00:06:29.265 "nvme_io_md": false, 00:06:29.265 "write_zeroes": true, 00:06:29.265 "zcopy": true, 00:06:29.265 "get_zone_info": false, 00:06:29.265 "zone_management": false, 00:06:29.265 "zone_append": false, 00:06:29.265 "compare": false, 00:06:29.265 "compare_and_write": false, 00:06:29.265 "abort": true, 00:06:29.265 "seek_hole": false, 00:06:29.265 "seek_data": false, 00:06:29.265 "copy": true, 00:06:29.265 "nvme_iov_md": false 00:06:29.265 }, 00:06:29.265 "memory_domains": [ 00:06:29.265 { 00:06:29.265 "dma_device_id": "system", 00:06:29.265 
"dma_device_type": 1 00:06:29.265 }, 00:06:29.265 { 00:06:29.265 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:29.265 "dma_device_type": 2 00:06:29.265 } 00:06:29.265 ], 00:06:29.265 "driver_specific": {} 00:06:29.265 } 00:06:29.265 ] 00:06:29.265 19:47:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:29.265 19:47:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:06:29.265 19:47:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:06:29.265 19:47:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:06:29.265 19:47:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:06:29.265 19:47:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:29.265 19:47:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:06:29.265 19:47:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:29.265 19:47:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:29.265 19:47:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:06:29.265 19:47:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:29.265 19:47:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:29.265 19:47:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:29.265 19:47:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:29.265 19:47:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:29.265 19:47:20 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:29.265 19:47:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:29.265 19:47:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:29.265 19:47:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:29.265 19:47:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:29.265 "name": "Existed_Raid", 00:06:29.265 "uuid": "6335b8a7-5b01-4f76-83ed-df88ba2f1ea7", 00:06:29.265 "strip_size_kb": 64, 00:06:29.265 "state": "online", 00:06:29.265 "raid_level": "raid0", 00:06:29.265 "superblock": false, 00:06:29.265 "num_base_bdevs": 3, 00:06:29.265 "num_base_bdevs_discovered": 3, 00:06:29.265 "num_base_bdevs_operational": 3, 00:06:29.265 "base_bdevs_list": [ 00:06:29.265 { 00:06:29.265 "name": "BaseBdev1", 00:06:29.265 "uuid": "db14a4e3-da12-4097-8803-d6b9875c9a6e", 00:06:29.265 "is_configured": true, 00:06:29.265 "data_offset": 0, 00:06:29.265 "data_size": 65536 00:06:29.265 }, 00:06:29.265 { 00:06:29.265 "name": "BaseBdev2", 00:06:29.265 "uuid": "20eed5f4-96b9-44b5-a38d-7efc8330c8e9", 00:06:29.265 "is_configured": true, 00:06:29.265 "data_offset": 0, 00:06:29.265 "data_size": 65536 00:06:29.265 }, 00:06:29.265 { 00:06:29.265 "name": "BaseBdev3", 00:06:29.265 "uuid": "7a9fc3be-d3b5-47b7-8bec-6d0fad8873ee", 00:06:29.265 "is_configured": true, 00:06:29.265 "data_offset": 0, 00:06:29.265 "data_size": 65536 00:06:29.265 } 00:06:29.265 ] 00:06:29.265 }' 00:06:29.265 19:47:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:29.265 19:47:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:29.523 19:47:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:06:29.523 19:47:20 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:06:29.523 19:47:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:06:29.523 19:47:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:06:29.523 19:47:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:06:29.523 19:47:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:06:29.523 19:47:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:06:29.523 19:47:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:29.523 19:47:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:29.523 19:47:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:06:29.781 [2024-11-26 19:47:20.461086] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:29.781 19:47:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:29.781 19:47:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:06:29.781 "name": "Existed_Raid", 00:06:29.781 "aliases": [ 00:06:29.781 "6335b8a7-5b01-4f76-83ed-df88ba2f1ea7" 00:06:29.781 ], 00:06:29.781 "product_name": "Raid Volume", 00:06:29.781 "block_size": 512, 00:06:29.781 "num_blocks": 196608, 00:06:29.781 "uuid": "6335b8a7-5b01-4f76-83ed-df88ba2f1ea7", 00:06:29.781 "assigned_rate_limits": { 00:06:29.781 "rw_ios_per_sec": 0, 00:06:29.781 "rw_mbytes_per_sec": 0, 00:06:29.781 "r_mbytes_per_sec": 0, 00:06:29.781 "w_mbytes_per_sec": 0 00:06:29.781 }, 00:06:29.781 "claimed": false, 00:06:29.781 "zoned": false, 00:06:29.781 "supported_io_types": { 00:06:29.781 "read": true, 00:06:29.781 "write": true, 00:06:29.781 "unmap": true, 00:06:29.781 "flush": true, 00:06:29.781 "reset": true, 
00:06:29.781 "nvme_admin": false, 00:06:29.781 "nvme_io": false, 00:06:29.781 "nvme_io_md": false, 00:06:29.781 "write_zeroes": true, 00:06:29.781 "zcopy": false, 00:06:29.781 "get_zone_info": false, 00:06:29.781 "zone_management": false, 00:06:29.781 "zone_append": false, 00:06:29.781 "compare": false, 00:06:29.781 "compare_and_write": false, 00:06:29.781 "abort": false, 00:06:29.781 "seek_hole": false, 00:06:29.781 "seek_data": false, 00:06:29.781 "copy": false, 00:06:29.781 "nvme_iov_md": false 00:06:29.781 }, 00:06:29.781 "memory_domains": [ 00:06:29.781 { 00:06:29.781 "dma_device_id": "system", 00:06:29.781 "dma_device_type": 1 00:06:29.781 }, 00:06:29.781 { 00:06:29.781 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:29.781 "dma_device_type": 2 00:06:29.781 }, 00:06:29.781 { 00:06:29.781 "dma_device_id": "system", 00:06:29.781 "dma_device_type": 1 00:06:29.781 }, 00:06:29.781 { 00:06:29.781 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:29.781 "dma_device_type": 2 00:06:29.781 }, 00:06:29.781 { 00:06:29.781 "dma_device_id": "system", 00:06:29.781 "dma_device_type": 1 00:06:29.781 }, 00:06:29.781 { 00:06:29.781 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:29.781 "dma_device_type": 2 00:06:29.781 } 00:06:29.781 ], 00:06:29.781 "driver_specific": { 00:06:29.781 "raid": { 00:06:29.781 "uuid": "6335b8a7-5b01-4f76-83ed-df88ba2f1ea7", 00:06:29.781 "strip_size_kb": 64, 00:06:29.781 "state": "online", 00:06:29.781 "raid_level": "raid0", 00:06:29.781 "superblock": false, 00:06:29.781 "num_base_bdevs": 3, 00:06:29.781 "num_base_bdevs_discovered": 3, 00:06:29.781 "num_base_bdevs_operational": 3, 00:06:29.781 "base_bdevs_list": [ 00:06:29.781 { 00:06:29.781 "name": "BaseBdev1", 00:06:29.781 "uuid": "db14a4e3-da12-4097-8803-d6b9875c9a6e", 00:06:29.781 "is_configured": true, 00:06:29.781 "data_offset": 0, 00:06:29.781 "data_size": 65536 00:06:29.781 }, 00:06:29.781 { 00:06:29.781 "name": "BaseBdev2", 00:06:29.781 "uuid": "20eed5f4-96b9-44b5-a38d-7efc8330c8e9", 
00:06:29.781 "is_configured": true, 00:06:29.781 "data_offset": 0, 00:06:29.781 "data_size": 65536 00:06:29.781 }, 00:06:29.782 { 00:06:29.782 "name": "BaseBdev3", 00:06:29.782 "uuid": "7a9fc3be-d3b5-47b7-8bec-6d0fad8873ee", 00:06:29.782 "is_configured": true, 00:06:29.782 "data_offset": 0, 00:06:29.782 "data_size": 65536 00:06:29.782 } 00:06:29.782 ] 00:06:29.782 } 00:06:29.782 } 00:06:29.782 }' 00:06:29.782 19:47:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:06:29.782 19:47:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:06:29.782 BaseBdev2 00:06:29.782 BaseBdev3' 00:06:29.782 19:47:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:29.782 19:47:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:06:29.782 19:47:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:29.782 19:47:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:06:29.782 19:47:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:29.782 19:47:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:29.782 19:47:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:29.782 19:47:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:29.782 19:47:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:29.782 19:47:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:29.782 19:47:20 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:29.782 19:47:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:06:29.782 19:47:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:29.782 19:47:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:29.782 19:47:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:29.782 19:47:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:29.782 19:47:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:29.782 19:47:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:29.782 19:47:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:29.782 19:47:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:29.782 19:47:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:06:29.782 19:47:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:29.782 19:47:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:29.782 19:47:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:29.782 19:47:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:29.782 19:47:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:29.782 19:47:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:06:29.782 19:47:20 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:29.782 19:47:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:29.782 [2024-11-26 19:47:20.665024] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:06:29.782 [2024-11-26 19:47:20.665056] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:29.782 [2024-11-26 19:47:20.665114] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:30.040 19:47:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:30.040 19:47:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:06:30.040 19:47:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:06:30.040 19:47:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:06:30.040 19:47:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:06:30.040 19:47:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:06:30.040 19:47:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:06:30.040 19:47:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:30.040 19:47:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:06:30.040 19:47:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:30.040 19:47:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:30.040 19:47:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:30.040 19:47:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:06:30.040 19:47:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:30.040 19:47:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:30.040 19:47:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:30.040 19:47:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:30.040 19:47:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:30.040 19:47:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:30.040 19:47:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:30.040 19:47:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:30.040 19:47:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:30.040 "name": "Existed_Raid", 00:06:30.040 "uuid": "6335b8a7-5b01-4f76-83ed-df88ba2f1ea7", 00:06:30.040 "strip_size_kb": 64, 00:06:30.040 "state": "offline", 00:06:30.040 "raid_level": "raid0", 00:06:30.040 "superblock": false, 00:06:30.040 "num_base_bdevs": 3, 00:06:30.040 "num_base_bdevs_discovered": 2, 00:06:30.040 "num_base_bdevs_operational": 2, 00:06:30.040 "base_bdevs_list": [ 00:06:30.040 { 00:06:30.040 "name": null, 00:06:30.040 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:30.040 "is_configured": false, 00:06:30.040 "data_offset": 0, 00:06:30.040 "data_size": 65536 00:06:30.040 }, 00:06:30.040 { 00:06:30.040 "name": "BaseBdev2", 00:06:30.040 "uuid": "20eed5f4-96b9-44b5-a38d-7efc8330c8e9", 00:06:30.040 "is_configured": true, 00:06:30.040 "data_offset": 0, 00:06:30.040 "data_size": 65536 00:06:30.040 }, 00:06:30.040 { 00:06:30.040 "name": "BaseBdev3", 00:06:30.040 "uuid": "7a9fc3be-d3b5-47b7-8bec-6d0fad8873ee", 00:06:30.040 "is_configured": true, 00:06:30.040 
"data_offset": 0, 00:06:30.040 "data_size": 65536 00:06:30.040 } 00:06:30.040 ] 00:06:30.040 }' 00:06:30.040 19:47:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:30.040 19:47:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:30.298 19:47:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:06:30.298 19:47:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:06:30.298 19:47:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:30.298 19:47:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:06:30.298 19:47:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:30.298 19:47:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:30.298 19:47:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:30.298 19:47:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:06:30.298 19:47:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:06:30.298 19:47:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:06:30.298 19:47:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:30.298 19:47:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:30.298 [2024-11-26 19:47:21.071516] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:06:30.298 19:47:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:30.298 19:47:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:06:30.298 19:47:21 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:06:30.298 19:47:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:30.298 19:47:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:06:30.298 19:47:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:30.298 19:47:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:30.298 19:47:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:30.298 19:47:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:06:30.298 19:47:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:06:30.298 19:47:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:06:30.298 19:47:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:30.298 19:47:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:30.298 [2024-11-26 19:47:21.174445] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:06:30.298 [2024-11-26 19:47:21.174680] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:06:30.298 19:47:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:30.298 19:47:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:06:30.298 19:47:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:06:30.298 19:47:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:30.298 19:47:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:06:30.298 
19:47:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:30.298 19:47:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:30.556 19:47:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:30.556 19:47:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:06:30.556 19:47:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:06:30.556 19:47:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:06:30.556 19:47:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:06:30.556 19:47:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:06:30.556 19:47:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:06:30.556 19:47:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:30.556 19:47:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:30.556 BaseBdev2 00:06:30.556 19:47:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:30.556 19:47:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:06:30.556 19:47:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:06:30.556 19:47:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:06:30.556 19:47:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:06:30.556 19:47:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:06:30.556 19:47:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:06:30.556 19:47:21 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:06:30.556 19:47:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:30.556 19:47:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:30.556 19:47:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:30.556 19:47:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:06:30.556 19:47:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:30.556 19:47:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:30.556 [ 00:06:30.556 { 00:06:30.556 "name": "BaseBdev2", 00:06:30.556 "aliases": [ 00:06:30.556 "b1083cf7-49ad-4653-bfa2-6bf82f3b1ece" 00:06:30.556 ], 00:06:30.556 "product_name": "Malloc disk", 00:06:30.556 "block_size": 512, 00:06:30.556 "num_blocks": 65536, 00:06:30.556 "uuid": "b1083cf7-49ad-4653-bfa2-6bf82f3b1ece", 00:06:30.556 "assigned_rate_limits": { 00:06:30.556 "rw_ios_per_sec": 0, 00:06:30.556 "rw_mbytes_per_sec": 0, 00:06:30.556 "r_mbytes_per_sec": 0, 00:06:30.556 "w_mbytes_per_sec": 0 00:06:30.556 }, 00:06:30.556 "claimed": false, 00:06:30.556 "zoned": false, 00:06:30.556 "supported_io_types": { 00:06:30.556 "read": true, 00:06:30.556 "write": true, 00:06:30.556 "unmap": true, 00:06:30.556 "flush": true, 00:06:30.556 "reset": true, 00:06:30.556 "nvme_admin": false, 00:06:30.556 "nvme_io": false, 00:06:30.556 "nvme_io_md": false, 00:06:30.556 "write_zeroes": true, 00:06:30.556 "zcopy": true, 00:06:30.556 "get_zone_info": false, 00:06:30.556 "zone_management": false, 00:06:30.556 "zone_append": false, 00:06:30.556 "compare": false, 00:06:30.556 "compare_and_write": false, 00:06:30.556 "abort": true, 00:06:30.556 "seek_hole": false, 00:06:30.556 "seek_data": false, 00:06:30.556 "copy": true, 
00:06:30.556 "nvme_iov_md": false 00:06:30.556 }, 00:06:30.556 "memory_domains": [ 00:06:30.556 { 00:06:30.556 "dma_device_id": "system", 00:06:30.556 "dma_device_type": 1 00:06:30.556 }, 00:06:30.556 { 00:06:30.556 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:30.556 "dma_device_type": 2 00:06:30.556 } 00:06:30.556 ], 00:06:30.556 "driver_specific": {} 00:06:30.556 } 00:06:30.556 ] 00:06:30.556 19:47:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:30.556 19:47:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:06:30.556 19:47:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:06:30.556 19:47:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:06:30.556 19:47:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:06:30.556 19:47:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:30.556 19:47:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:30.557 BaseBdev3 00:06:30.557 19:47:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:30.557 19:47:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:06:30.557 19:47:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:06:30.557 19:47:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:06:30.557 19:47:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:06:30.557 19:47:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:06:30.557 19:47:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:06:30.557 19:47:21 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:06:30.557 19:47:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:30.557 19:47:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:30.557 19:47:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:30.557 19:47:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:06:30.557 19:47:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:30.557 19:47:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:30.557 [ 00:06:30.557 { 00:06:30.557 "name": "BaseBdev3", 00:06:30.557 "aliases": [ 00:06:30.557 "a6555680-3187-4f79-acc4-f3818fda5883" 00:06:30.557 ], 00:06:30.557 "product_name": "Malloc disk", 00:06:30.557 "block_size": 512, 00:06:30.557 "num_blocks": 65536, 00:06:30.557 "uuid": "a6555680-3187-4f79-acc4-f3818fda5883", 00:06:30.557 "assigned_rate_limits": { 00:06:30.557 "rw_ios_per_sec": 0, 00:06:30.557 "rw_mbytes_per_sec": 0, 00:06:30.557 "r_mbytes_per_sec": 0, 00:06:30.557 "w_mbytes_per_sec": 0 00:06:30.557 }, 00:06:30.557 "claimed": false, 00:06:30.557 "zoned": false, 00:06:30.557 "supported_io_types": { 00:06:30.557 "read": true, 00:06:30.557 "write": true, 00:06:30.557 "unmap": true, 00:06:30.557 "flush": true, 00:06:30.557 "reset": true, 00:06:30.557 "nvme_admin": false, 00:06:30.557 "nvme_io": false, 00:06:30.557 "nvme_io_md": false, 00:06:30.557 "write_zeroes": true, 00:06:30.557 "zcopy": true, 00:06:30.557 "get_zone_info": false, 00:06:30.557 "zone_management": false, 00:06:30.557 "zone_append": false, 00:06:30.557 "compare": false, 00:06:30.557 "compare_and_write": false, 00:06:30.557 "abort": true, 00:06:30.557 "seek_hole": false, 00:06:30.557 "seek_data": false, 00:06:30.557 "copy": true, 
00:06:30.557 "nvme_iov_md": false 00:06:30.557 }, 00:06:30.557 "memory_domains": [ 00:06:30.557 { 00:06:30.557 "dma_device_id": "system", 00:06:30.557 "dma_device_type": 1 00:06:30.557 }, 00:06:30.557 { 00:06:30.557 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:30.557 "dma_device_type": 2 00:06:30.557 } 00:06:30.557 ], 00:06:30.557 "driver_specific": {} 00:06:30.557 } 00:06:30.557 ] 00:06:30.557 19:47:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:30.557 19:47:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:06:30.557 19:47:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:06:30.557 19:47:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:06:30.557 19:47:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:06:30.557 19:47:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:30.557 19:47:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:30.557 [2024-11-26 19:47:21.373394] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:06:30.557 [2024-11-26 19:47:21.373450] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:06:30.557 [2024-11-26 19:47:21.373472] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:06:30.557 [2024-11-26 19:47:21.375143] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:06:30.557 19:47:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:30.557 19:47:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:06:30.557 19:47:21 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:30.557 19:47:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:30.557 19:47:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:30.557 19:47:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:30.557 19:47:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:06:30.557 19:47:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:30.557 19:47:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:30.557 19:47:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:30.557 19:47:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:30.557 19:47:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:30.557 19:47:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:30.557 19:47:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:30.557 19:47:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:30.557 19:47:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:30.557 19:47:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:30.557 "name": "Existed_Raid", 00:06:30.557 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:30.557 "strip_size_kb": 64, 00:06:30.557 "state": "configuring", 00:06:30.557 "raid_level": "raid0", 00:06:30.557 "superblock": false, 00:06:30.557 "num_base_bdevs": 3, 00:06:30.557 "num_base_bdevs_discovered": 2, 00:06:30.557 
"num_base_bdevs_operational": 3, 00:06:30.557 "base_bdevs_list": [ 00:06:30.557 { 00:06:30.557 "name": "BaseBdev1", 00:06:30.557 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:30.557 "is_configured": false, 00:06:30.557 "data_offset": 0, 00:06:30.557 "data_size": 0 00:06:30.557 }, 00:06:30.557 { 00:06:30.557 "name": "BaseBdev2", 00:06:30.557 "uuid": "b1083cf7-49ad-4653-bfa2-6bf82f3b1ece", 00:06:30.557 "is_configured": true, 00:06:30.557 "data_offset": 0, 00:06:30.557 "data_size": 65536 00:06:30.557 }, 00:06:30.557 { 00:06:30.557 "name": "BaseBdev3", 00:06:30.557 "uuid": "a6555680-3187-4f79-acc4-f3818fda5883", 00:06:30.557 "is_configured": true, 00:06:30.557 "data_offset": 0, 00:06:30.557 "data_size": 65536 00:06:30.557 } 00:06:30.557 ] 00:06:30.557 }' 00:06:30.557 19:47:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:30.557 19:47:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:30.815 19:47:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:06:30.815 19:47:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:30.815 19:47:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:30.815 [2024-11-26 19:47:21.725492] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:06:30.815 19:47:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:30.815 19:47:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:06:30.815 19:47:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:30.815 19:47:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:30.815 19:47:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 
-- # local raid_level=raid0 00:06:30.815 19:47:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:30.815 19:47:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:06:30.815 19:47:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:30.815 19:47:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:30.815 19:47:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:30.815 19:47:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:30.815 19:47:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:30.815 19:47:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:30.815 19:47:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:30.815 19:47:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:30.815 19:47:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:31.108 19:47:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:31.108 "name": "Existed_Raid", 00:06:31.108 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:31.108 "strip_size_kb": 64, 00:06:31.108 "state": "configuring", 00:06:31.108 "raid_level": "raid0", 00:06:31.108 "superblock": false, 00:06:31.108 "num_base_bdevs": 3, 00:06:31.108 "num_base_bdevs_discovered": 1, 00:06:31.108 "num_base_bdevs_operational": 3, 00:06:31.108 "base_bdevs_list": [ 00:06:31.108 { 00:06:31.108 "name": "BaseBdev1", 00:06:31.108 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:31.108 "is_configured": false, 00:06:31.108 "data_offset": 0, 00:06:31.108 "data_size": 0 00:06:31.108 }, 00:06:31.108 { 
00:06:31.108 "name": null, 00:06:31.108 "uuid": "b1083cf7-49ad-4653-bfa2-6bf82f3b1ece", 00:06:31.108 "is_configured": false, 00:06:31.108 "data_offset": 0, 00:06:31.108 "data_size": 65536 00:06:31.108 }, 00:06:31.108 { 00:06:31.108 "name": "BaseBdev3", 00:06:31.108 "uuid": "a6555680-3187-4f79-acc4-f3818fda5883", 00:06:31.108 "is_configured": true, 00:06:31.108 "data_offset": 0, 00:06:31.108 "data_size": 65536 00:06:31.108 } 00:06:31.108 ] 00:06:31.108 }' 00:06:31.108 19:47:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:31.108 19:47:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:31.396 19:47:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:31.396 19:47:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:31.396 19:47:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:31.396 19:47:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:06:31.396 19:47:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:31.396 19:47:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:06:31.396 19:47:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:06:31.396 19:47:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:31.396 19:47:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:31.396 [2024-11-26 19:47:22.102236] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:06:31.396 BaseBdev1 00:06:31.396 19:47:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:31.396 19:47:22 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:06:31.396 19:47:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:06:31.396 19:47:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:06:31.396 19:47:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:06:31.396 19:47:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:06:31.396 19:47:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:06:31.396 19:47:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:06:31.396 19:47:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:31.396 19:47:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:31.396 19:47:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:31.396 19:47:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:06:31.396 19:47:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:31.396 19:47:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:31.396 [ 00:06:31.396 { 00:06:31.396 "name": "BaseBdev1", 00:06:31.396 "aliases": [ 00:06:31.396 "36ab995c-616b-449b-8742-0da9d95ec07e" 00:06:31.396 ], 00:06:31.396 "product_name": "Malloc disk", 00:06:31.396 "block_size": 512, 00:06:31.396 "num_blocks": 65536, 00:06:31.396 "uuid": "36ab995c-616b-449b-8742-0da9d95ec07e", 00:06:31.396 "assigned_rate_limits": { 00:06:31.396 "rw_ios_per_sec": 0, 00:06:31.396 "rw_mbytes_per_sec": 0, 00:06:31.396 "r_mbytes_per_sec": 0, 00:06:31.396 "w_mbytes_per_sec": 0 00:06:31.396 }, 00:06:31.396 "claimed": true, 
00:06:31.396 "claim_type": "exclusive_write", 00:06:31.396 "zoned": false, 00:06:31.396 "supported_io_types": { 00:06:31.396 "read": true, 00:06:31.396 "write": true, 00:06:31.396 "unmap": true, 00:06:31.396 "flush": true, 00:06:31.396 "reset": true, 00:06:31.396 "nvme_admin": false, 00:06:31.396 "nvme_io": false, 00:06:31.396 "nvme_io_md": false, 00:06:31.396 "write_zeroes": true, 00:06:31.396 "zcopy": true, 00:06:31.396 "get_zone_info": false, 00:06:31.396 "zone_management": false, 00:06:31.396 "zone_append": false, 00:06:31.396 "compare": false, 00:06:31.396 "compare_and_write": false, 00:06:31.396 "abort": true, 00:06:31.396 "seek_hole": false, 00:06:31.396 "seek_data": false, 00:06:31.396 "copy": true, 00:06:31.396 "nvme_iov_md": false 00:06:31.396 }, 00:06:31.396 "memory_domains": [ 00:06:31.396 { 00:06:31.396 "dma_device_id": "system", 00:06:31.396 "dma_device_type": 1 00:06:31.396 }, 00:06:31.396 { 00:06:31.396 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:31.396 "dma_device_type": 2 00:06:31.396 } 00:06:31.396 ], 00:06:31.396 "driver_specific": {} 00:06:31.396 } 00:06:31.396 ] 00:06:31.397 19:47:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:31.397 19:47:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:06:31.397 19:47:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:06:31.397 19:47:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:31.397 19:47:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:31.397 19:47:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:31.397 19:47:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:31.397 19:47:22 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:06:31.397 19:47:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:31.397 19:47:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:31.397 19:47:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:31.397 19:47:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:31.397 19:47:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:31.397 19:47:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:31.397 19:47:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:31.397 19:47:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:31.397 19:47:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:31.397 19:47:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:31.397 "name": "Existed_Raid", 00:06:31.397 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:31.397 "strip_size_kb": 64, 00:06:31.397 "state": "configuring", 00:06:31.397 "raid_level": "raid0", 00:06:31.397 "superblock": false, 00:06:31.397 "num_base_bdevs": 3, 00:06:31.397 "num_base_bdevs_discovered": 2, 00:06:31.397 "num_base_bdevs_operational": 3, 00:06:31.397 "base_bdevs_list": [ 00:06:31.397 { 00:06:31.397 "name": "BaseBdev1", 00:06:31.397 "uuid": "36ab995c-616b-449b-8742-0da9d95ec07e", 00:06:31.397 "is_configured": true, 00:06:31.397 "data_offset": 0, 00:06:31.397 "data_size": 65536 00:06:31.397 }, 00:06:31.397 { 00:06:31.397 "name": null, 00:06:31.397 "uuid": "b1083cf7-49ad-4653-bfa2-6bf82f3b1ece", 00:06:31.397 "is_configured": false, 00:06:31.397 "data_offset": 0, 00:06:31.397 "data_size": 65536 
00:06:31.397 }, 00:06:31.397 { 00:06:31.397 "name": "BaseBdev3", 00:06:31.397 "uuid": "a6555680-3187-4f79-acc4-f3818fda5883", 00:06:31.397 "is_configured": true, 00:06:31.397 "data_offset": 0, 00:06:31.397 "data_size": 65536 00:06:31.397 } 00:06:31.397 ] 00:06:31.397 }' 00:06:31.397 19:47:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:31.397 19:47:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:31.655 19:47:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:31.655 19:47:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:06:31.655 19:47:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:31.655 19:47:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:31.655 19:47:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:31.655 19:47:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:06:31.655 19:47:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:06:31.655 19:47:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:31.655 19:47:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:31.655 [2024-11-26 19:47:22.470383] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:06:31.655 19:47:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:31.655 19:47:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:06:31.655 19:47:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:31.655 19:47:22 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:31.655 19:47:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:31.655 19:47:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:31.655 19:47:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:06:31.655 19:47:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:31.655 19:47:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:31.655 19:47:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:31.655 19:47:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:31.655 19:47:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:31.655 19:47:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:31.655 19:47:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:31.655 19:47:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:31.655 19:47:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:31.655 19:47:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:31.655 "name": "Existed_Raid", 00:06:31.655 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:31.655 "strip_size_kb": 64, 00:06:31.655 "state": "configuring", 00:06:31.655 "raid_level": "raid0", 00:06:31.655 "superblock": false, 00:06:31.655 "num_base_bdevs": 3, 00:06:31.655 "num_base_bdevs_discovered": 1, 00:06:31.655 "num_base_bdevs_operational": 3, 00:06:31.655 "base_bdevs_list": [ 00:06:31.655 { 00:06:31.655 "name": "BaseBdev1", 
00:06:31.655 "uuid": "36ab995c-616b-449b-8742-0da9d95ec07e", 00:06:31.655 "is_configured": true, 00:06:31.655 "data_offset": 0, 00:06:31.655 "data_size": 65536 00:06:31.655 }, 00:06:31.655 { 00:06:31.655 "name": null, 00:06:31.655 "uuid": "b1083cf7-49ad-4653-bfa2-6bf82f3b1ece", 00:06:31.655 "is_configured": false, 00:06:31.655 "data_offset": 0, 00:06:31.655 "data_size": 65536 00:06:31.655 }, 00:06:31.655 { 00:06:31.655 "name": null, 00:06:31.655 "uuid": "a6555680-3187-4f79-acc4-f3818fda5883", 00:06:31.655 "is_configured": false, 00:06:31.655 "data_offset": 0, 00:06:31.655 "data_size": 65536 00:06:31.655 } 00:06:31.655 ] 00:06:31.655 }' 00:06:31.655 19:47:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:31.655 19:47:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:31.931 19:47:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:06:31.931 19:47:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:31.931 19:47:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:31.931 19:47:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:31.931 19:47:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:31.931 19:47:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:06:31.931 19:47:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:06:31.931 19:47:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:31.931 19:47:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:31.931 [2024-11-26 19:47:22.810465] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is 
claimed 00:06:31.931 19:47:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:31.931 19:47:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:06:31.931 19:47:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:31.931 19:47:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:31.931 19:47:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:31.931 19:47:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:31.931 19:47:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:06:31.931 19:47:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:31.931 19:47:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:31.931 19:47:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:31.931 19:47:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:31.931 19:47:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:31.931 19:47:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:31.931 19:47:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:31.931 19:47:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:31.931 19:47:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:31.931 19:47:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:31.931 "name": "Existed_Raid", 00:06:31.931 
"uuid": "00000000-0000-0000-0000-000000000000", 00:06:31.931 "strip_size_kb": 64, 00:06:31.931 "state": "configuring", 00:06:31.931 "raid_level": "raid0", 00:06:31.931 "superblock": false, 00:06:31.931 "num_base_bdevs": 3, 00:06:31.931 "num_base_bdevs_discovered": 2, 00:06:31.931 "num_base_bdevs_operational": 3, 00:06:31.931 "base_bdevs_list": [ 00:06:31.931 { 00:06:31.931 "name": "BaseBdev1", 00:06:31.931 "uuid": "36ab995c-616b-449b-8742-0da9d95ec07e", 00:06:31.931 "is_configured": true, 00:06:31.931 "data_offset": 0, 00:06:31.931 "data_size": 65536 00:06:31.931 }, 00:06:31.931 { 00:06:31.931 "name": null, 00:06:31.931 "uuid": "b1083cf7-49ad-4653-bfa2-6bf82f3b1ece", 00:06:31.931 "is_configured": false, 00:06:31.931 "data_offset": 0, 00:06:31.931 "data_size": 65536 00:06:31.931 }, 00:06:31.931 { 00:06:31.931 "name": "BaseBdev3", 00:06:31.931 "uuid": "a6555680-3187-4f79-acc4-f3818fda5883", 00:06:31.931 "is_configured": true, 00:06:31.931 "data_offset": 0, 00:06:31.931 "data_size": 65536 00:06:31.931 } 00:06:31.931 ] 00:06:31.931 }' 00:06:31.931 19:47:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:31.931 19:47:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:32.497 19:47:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:32.497 19:47:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:32.497 19:47:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:32.497 19:47:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:06:32.497 19:47:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:32.497 19:47:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:06:32.497 19:47:23 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:06:32.497 19:47:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:32.497 19:47:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:32.497 [2024-11-26 19:47:23.202540] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:06:32.497 19:47:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:32.497 19:47:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:06:32.497 19:47:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:32.497 19:47:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:32.497 19:47:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:32.497 19:47:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:32.497 19:47:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:06:32.497 19:47:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:32.497 19:47:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:32.497 19:47:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:32.497 19:47:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:32.497 19:47:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:32.497 19:47:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:32.497 19:47:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:32.497 19:47:23 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:32.497 19:47:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:32.497 19:47:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:32.497 "name": "Existed_Raid", 00:06:32.497 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:32.497 "strip_size_kb": 64, 00:06:32.497 "state": "configuring", 00:06:32.497 "raid_level": "raid0", 00:06:32.497 "superblock": false, 00:06:32.497 "num_base_bdevs": 3, 00:06:32.497 "num_base_bdevs_discovered": 1, 00:06:32.497 "num_base_bdevs_operational": 3, 00:06:32.497 "base_bdevs_list": [ 00:06:32.497 { 00:06:32.497 "name": null, 00:06:32.497 "uuid": "36ab995c-616b-449b-8742-0da9d95ec07e", 00:06:32.497 "is_configured": false, 00:06:32.497 "data_offset": 0, 00:06:32.497 "data_size": 65536 00:06:32.497 }, 00:06:32.497 { 00:06:32.497 "name": null, 00:06:32.497 "uuid": "b1083cf7-49ad-4653-bfa2-6bf82f3b1ece", 00:06:32.498 "is_configured": false, 00:06:32.498 "data_offset": 0, 00:06:32.498 "data_size": 65536 00:06:32.498 }, 00:06:32.498 { 00:06:32.498 "name": "BaseBdev3", 00:06:32.498 "uuid": "a6555680-3187-4f79-acc4-f3818fda5883", 00:06:32.498 "is_configured": true, 00:06:32.498 "data_offset": 0, 00:06:32.498 "data_size": 65536 00:06:32.498 } 00:06:32.498 ] 00:06:32.498 }' 00:06:32.498 19:47:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:32.498 19:47:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:32.756 19:47:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:32.756 19:47:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:32.756 19:47:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:06:32.756 19:47:23 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:32.756 19:47:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:32.756 19:47:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:06:32.756 19:47:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:06:32.756 19:47:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:32.756 19:47:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:32.756 [2024-11-26 19:47:23.599867] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:06:32.756 19:47:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:32.756 19:47:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:06:32.756 19:47:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:32.756 19:47:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:32.756 19:47:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:32.756 19:47:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:32.756 19:47:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:06:32.756 19:47:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:32.756 19:47:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:32.756 19:47:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:32.756 19:47:23 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:32.756 19:47:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:32.756 19:47:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:32.756 19:47:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:32.756 19:47:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:32.757 19:47:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:32.757 19:47:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:32.757 "name": "Existed_Raid", 00:06:32.757 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:32.757 "strip_size_kb": 64, 00:06:32.757 "state": "configuring", 00:06:32.757 "raid_level": "raid0", 00:06:32.757 "superblock": false, 00:06:32.757 "num_base_bdevs": 3, 00:06:32.757 "num_base_bdevs_discovered": 2, 00:06:32.757 "num_base_bdevs_operational": 3, 00:06:32.757 "base_bdevs_list": [ 00:06:32.757 { 00:06:32.757 "name": null, 00:06:32.757 "uuid": "36ab995c-616b-449b-8742-0da9d95ec07e", 00:06:32.757 "is_configured": false, 00:06:32.757 "data_offset": 0, 00:06:32.757 "data_size": 65536 00:06:32.757 }, 00:06:32.757 { 00:06:32.757 "name": "BaseBdev2", 00:06:32.757 "uuid": "b1083cf7-49ad-4653-bfa2-6bf82f3b1ece", 00:06:32.757 "is_configured": true, 00:06:32.757 "data_offset": 0, 00:06:32.757 "data_size": 65536 00:06:32.757 }, 00:06:32.757 { 00:06:32.757 "name": "BaseBdev3", 00:06:32.757 "uuid": "a6555680-3187-4f79-acc4-f3818fda5883", 00:06:32.757 "is_configured": true, 00:06:32.757 "data_offset": 0, 00:06:32.757 "data_size": 65536 00:06:32.757 } 00:06:32.757 ] 00:06:32.757 }' 00:06:32.757 19:47:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:32.757 19:47:23 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:33.015 19:47:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:33.016 19:47:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:33.016 19:47:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:33.016 19:47:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:06:33.274 19:47:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:33.274 19:47:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:06:33.274 19:47:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:33.274 19:47:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:06:33.274 19:47:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:33.274 19:47:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:33.274 19:47:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:33.274 19:47:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 36ab995c-616b-449b-8742-0da9d95ec07e 00:06:33.274 19:47:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:33.274 19:47:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:33.274 [2024-11-26 19:47:24.024572] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:06:33.274 [2024-11-26 19:47:24.024784] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:06:33.274 [2024-11-26 
19:47:24.024801] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:06:33.274 [2024-11-26 19:47:24.025031] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:06:33.274 [2024-11-26 19:47:24.025154] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:06:33.274 [2024-11-26 19:47:24.025160] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:06:33.274 [2024-11-26 19:47:24.025403] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:33.274 NewBaseBdev 00:06:33.274 19:47:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:33.274 19:47:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:06:33.274 19:47:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:06:33.274 19:47:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:06:33.274 19:47:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:06:33.274 19:47:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:06:33.274 19:47:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:06:33.274 19:47:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:06:33.274 19:47:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:33.274 19:47:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:33.274 19:47:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:33.274 19:47:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b 
NewBaseBdev -t 2000 00:06:33.274 19:47:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:33.274 19:47:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:33.274 [ 00:06:33.274 { 00:06:33.274 "name": "NewBaseBdev", 00:06:33.274 "aliases": [ 00:06:33.274 "36ab995c-616b-449b-8742-0da9d95ec07e" 00:06:33.274 ], 00:06:33.274 "product_name": "Malloc disk", 00:06:33.274 "block_size": 512, 00:06:33.274 "num_blocks": 65536, 00:06:33.274 "uuid": "36ab995c-616b-449b-8742-0da9d95ec07e", 00:06:33.274 "assigned_rate_limits": { 00:06:33.274 "rw_ios_per_sec": 0, 00:06:33.274 "rw_mbytes_per_sec": 0, 00:06:33.274 "r_mbytes_per_sec": 0, 00:06:33.274 "w_mbytes_per_sec": 0 00:06:33.274 }, 00:06:33.274 "claimed": true, 00:06:33.274 "claim_type": "exclusive_write", 00:06:33.274 "zoned": false, 00:06:33.274 "supported_io_types": { 00:06:33.274 "read": true, 00:06:33.274 "write": true, 00:06:33.274 "unmap": true, 00:06:33.274 "flush": true, 00:06:33.274 "reset": true, 00:06:33.274 "nvme_admin": false, 00:06:33.274 "nvme_io": false, 00:06:33.274 "nvme_io_md": false, 00:06:33.274 "write_zeroes": true, 00:06:33.274 "zcopy": true, 00:06:33.274 "get_zone_info": false, 00:06:33.274 "zone_management": false, 00:06:33.274 "zone_append": false, 00:06:33.274 "compare": false, 00:06:33.274 "compare_and_write": false, 00:06:33.274 "abort": true, 00:06:33.274 "seek_hole": false, 00:06:33.274 "seek_data": false, 00:06:33.274 "copy": true, 00:06:33.274 "nvme_iov_md": false 00:06:33.274 }, 00:06:33.274 "memory_domains": [ 00:06:33.274 { 00:06:33.274 "dma_device_id": "system", 00:06:33.274 "dma_device_type": 1 00:06:33.274 }, 00:06:33.274 { 00:06:33.274 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:33.274 "dma_device_type": 2 00:06:33.274 } 00:06:33.274 ], 00:06:33.274 "driver_specific": {} 00:06:33.274 } 00:06:33.274 ] 00:06:33.274 19:47:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:06:33.274 19:47:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:06:33.274 19:47:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:06:33.274 19:47:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:33.274 19:47:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:06:33.274 19:47:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:33.274 19:47:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:33.274 19:47:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:06:33.274 19:47:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:33.274 19:47:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:33.275 19:47:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:33.275 19:47:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:33.275 19:47:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:33.275 19:47:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:33.275 19:47:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:33.275 19:47:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:33.275 19:47:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:33.275 19:47:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:33.275 "name": "Existed_Raid", 00:06:33.275 "uuid": 
"787e1666-fbd6-41a9-a27f-be5f1ebb0fd3", 00:06:33.275 "strip_size_kb": 64, 00:06:33.275 "state": "online", 00:06:33.275 "raid_level": "raid0", 00:06:33.275 "superblock": false, 00:06:33.275 "num_base_bdevs": 3, 00:06:33.275 "num_base_bdevs_discovered": 3, 00:06:33.275 "num_base_bdevs_operational": 3, 00:06:33.275 "base_bdevs_list": [ 00:06:33.275 { 00:06:33.275 "name": "NewBaseBdev", 00:06:33.275 "uuid": "36ab995c-616b-449b-8742-0da9d95ec07e", 00:06:33.275 "is_configured": true, 00:06:33.275 "data_offset": 0, 00:06:33.275 "data_size": 65536 00:06:33.275 }, 00:06:33.275 { 00:06:33.275 "name": "BaseBdev2", 00:06:33.275 "uuid": "b1083cf7-49ad-4653-bfa2-6bf82f3b1ece", 00:06:33.275 "is_configured": true, 00:06:33.275 "data_offset": 0, 00:06:33.275 "data_size": 65536 00:06:33.275 }, 00:06:33.275 { 00:06:33.275 "name": "BaseBdev3", 00:06:33.275 "uuid": "a6555680-3187-4f79-acc4-f3818fda5883", 00:06:33.275 "is_configured": true, 00:06:33.275 "data_offset": 0, 00:06:33.275 "data_size": 65536 00:06:33.275 } 00:06:33.275 ] 00:06:33.275 }' 00:06:33.275 19:47:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:33.275 19:47:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:33.533 19:47:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:06:33.533 19:47:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:06:33.533 19:47:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:06:33.533 19:47:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:06:33.533 19:47:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:06:33.533 19:47:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:06:33.533 19:47:24 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:06:33.533 19:47:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:33.533 19:47:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:33.533 19:47:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:06:33.533 [2024-11-26 19:47:24.396986] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:33.533 19:47:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:33.533 19:47:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:06:33.533 "name": "Existed_Raid", 00:06:33.533 "aliases": [ 00:06:33.533 "787e1666-fbd6-41a9-a27f-be5f1ebb0fd3" 00:06:33.533 ], 00:06:33.533 "product_name": "Raid Volume", 00:06:33.533 "block_size": 512, 00:06:33.533 "num_blocks": 196608, 00:06:33.533 "uuid": "787e1666-fbd6-41a9-a27f-be5f1ebb0fd3", 00:06:33.533 "assigned_rate_limits": { 00:06:33.533 "rw_ios_per_sec": 0, 00:06:33.533 "rw_mbytes_per_sec": 0, 00:06:33.533 "r_mbytes_per_sec": 0, 00:06:33.533 "w_mbytes_per_sec": 0 00:06:33.534 }, 00:06:33.534 "claimed": false, 00:06:33.534 "zoned": false, 00:06:33.534 "supported_io_types": { 00:06:33.534 "read": true, 00:06:33.534 "write": true, 00:06:33.534 "unmap": true, 00:06:33.534 "flush": true, 00:06:33.534 "reset": true, 00:06:33.534 "nvme_admin": false, 00:06:33.534 "nvme_io": false, 00:06:33.534 "nvme_io_md": false, 00:06:33.534 "write_zeroes": true, 00:06:33.534 "zcopy": false, 00:06:33.534 "get_zone_info": false, 00:06:33.534 "zone_management": false, 00:06:33.534 "zone_append": false, 00:06:33.534 "compare": false, 00:06:33.534 "compare_and_write": false, 00:06:33.534 "abort": false, 00:06:33.534 "seek_hole": false, 00:06:33.534 "seek_data": false, 00:06:33.534 "copy": false, 00:06:33.534 "nvme_iov_md": false 00:06:33.534 }, 00:06:33.534 "memory_domains": [ 
00:06:33.534 { 00:06:33.534 "dma_device_id": "system", 00:06:33.534 "dma_device_type": 1 00:06:33.534 }, 00:06:33.534 { 00:06:33.534 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:33.534 "dma_device_type": 2 00:06:33.534 }, 00:06:33.534 { 00:06:33.534 "dma_device_id": "system", 00:06:33.534 "dma_device_type": 1 00:06:33.534 }, 00:06:33.534 { 00:06:33.534 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:33.534 "dma_device_type": 2 00:06:33.534 }, 00:06:33.534 { 00:06:33.534 "dma_device_id": "system", 00:06:33.534 "dma_device_type": 1 00:06:33.534 }, 00:06:33.534 { 00:06:33.534 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:33.534 "dma_device_type": 2 00:06:33.534 } 00:06:33.534 ], 00:06:33.534 "driver_specific": { 00:06:33.534 "raid": { 00:06:33.534 "uuid": "787e1666-fbd6-41a9-a27f-be5f1ebb0fd3", 00:06:33.534 "strip_size_kb": 64, 00:06:33.534 "state": "online", 00:06:33.534 "raid_level": "raid0", 00:06:33.534 "superblock": false, 00:06:33.534 "num_base_bdevs": 3, 00:06:33.534 "num_base_bdevs_discovered": 3, 00:06:33.534 "num_base_bdevs_operational": 3, 00:06:33.534 "base_bdevs_list": [ 00:06:33.534 { 00:06:33.534 "name": "NewBaseBdev", 00:06:33.534 "uuid": "36ab995c-616b-449b-8742-0da9d95ec07e", 00:06:33.534 "is_configured": true, 00:06:33.534 "data_offset": 0, 00:06:33.534 "data_size": 65536 00:06:33.534 }, 00:06:33.534 { 00:06:33.534 "name": "BaseBdev2", 00:06:33.534 "uuid": "b1083cf7-49ad-4653-bfa2-6bf82f3b1ece", 00:06:33.534 "is_configured": true, 00:06:33.534 "data_offset": 0, 00:06:33.534 "data_size": 65536 00:06:33.534 }, 00:06:33.534 { 00:06:33.534 "name": "BaseBdev3", 00:06:33.534 "uuid": "a6555680-3187-4f79-acc4-f3818fda5883", 00:06:33.534 "is_configured": true, 00:06:33.534 "data_offset": 0, 00:06:33.534 "data_size": 65536 00:06:33.534 } 00:06:33.534 ] 00:06:33.534 } 00:06:33.534 } 00:06:33.534 }' 00:06:33.534 19:47:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured 
== true).name' 00:06:33.534 19:47:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:06:33.534 BaseBdev2 00:06:33.534 BaseBdev3' 00:06:33.534 19:47:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:33.792 19:47:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:06:33.792 19:47:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:33.792 19:47:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:06:33.792 19:47:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:33.792 19:47:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:33.792 19:47:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:33.792 19:47:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:33.792 19:47:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:33.792 19:47:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:33.792 19:47:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:33.792 19:47:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:06:33.792 19:47:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:33.792 19:47:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:33.792 19:47:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:06:33.792 19:47:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:33.792 19:47:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:33.792 19:47:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:33.792 19:47:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:33.792 19:47:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:06:33.792 19:47:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:33.792 19:47:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:33.792 19:47:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:33.792 19:47:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:33.792 19:47:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:33.792 19:47:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:33.792 19:47:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:06:33.792 19:47:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:33.792 19:47:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:33.792 [2024-11-26 19:47:24.592736] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:06:33.792 [2024-11-26 19:47:24.592769] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:33.792 [2024-11-26 19:47:24.592849] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:33.792 [2024-11-26 
19:47:24.592909] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:33.792 [2024-11-26 19:47:24.592920] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:06:33.792 19:47:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:33.792 19:47:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 62402 00:06:33.792 19:47:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 62402 ']' 00:06:33.792 19:47:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 62402 00:06:33.792 19:47:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:06:33.792 19:47:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:33.792 19:47:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62402 00:06:33.792 killing process with pid 62402 00:06:33.792 19:47:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:33.792 19:47:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:33.792 19:47:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62402' 00:06:33.792 19:47:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 62402 00:06:33.792 [2024-11-26 19:47:24.623130] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:33.792 19:47:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 62402 00:06:34.050 [2024-11-26 19:47:24.779390] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:34.657 19:47:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 
00:06:34.657 00:06:34.657 real 0m7.786s 00:06:34.657 user 0m12.473s 00:06:34.657 sys 0m1.336s 00:06:34.657 19:47:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:34.657 ************************************ 00:06:34.657 END TEST raid_state_function_test 00:06:34.657 ************************************ 00:06:34.657 19:47:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:34.657 19:47:25 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:06:34.657 19:47:25 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:06:34.657 19:47:25 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:34.657 19:47:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:34.657 ************************************ 00:06:34.657 START TEST raid_state_function_test_sb 00:06:34.657 ************************************ 00:06:34.657 19:47:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 3 true 00:06:34.657 19:47:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:06:34.657 19:47:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:06:34.657 19:47:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:06:34.657 19:47:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:06:34.657 19:47:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:06:34.657 19:47:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:06:34.657 19:47:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:06:34.657 19:47:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
00:06:34.657 19:47:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:06:34.657 19:47:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:06:34.657 19:47:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:06:34.657 19:47:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:06:34.657 19:47:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:06:34.657 19:47:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:06:34.657 19:47:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:06:34.657 19:47:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:06:34.657 19:47:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:06:34.657 19:47:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:06:34.657 19:47:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:06:34.657 19:47:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:06:34.657 19:47:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:06:34.657 19:47:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:06:34.657 19:47:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:06:34.657 19:47:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:06:34.657 19:47:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:06:34.657 19:47:25 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:06:34.657 Process raid pid: 62995 00:06:34.657 19:47:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=62995 00:06:34.657 19:47:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62995' 00:06:34.657 19:47:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 62995 00:06:34.657 19:47:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 62995 ']' 00:06:34.657 19:47:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:34.657 19:47:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:34.657 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:34.657 19:47:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:34.657 19:47:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:34.657 19:47:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:34.657 19:47:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:34.657 [2024-11-26 19:47:25.523715] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 
00:06:34.657 [2024-11-26 19:47:25.524116] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:34.916 [2024-11-26 19:47:25.688031] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.916 [2024-11-26 19:47:25.810239] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.174 [2024-11-26 19:47:25.963595] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:35.174 [2024-11-26 19:47:25.963647] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:35.741 19:47:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:35.741 19:47:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:06:35.741 19:47:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:06:35.741 19:47:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:35.741 19:47:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:35.741 [2024-11-26 19:47:26.453297] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:06:35.741 [2024-11-26 19:47:26.453378] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:06:35.741 [2024-11-26 19:47:26.453390] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:35.741 [2024-11-26 19:47:26.453400] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:35.741 [2024-11-26 19:47:26.453406] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with 
name: BaseBdev3 00:06:35.741 [2024-11-26 19:47:26.453415] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:06:35.741 19:47:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:35.741 19:47:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:06:35.741 19:47:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:35.741 19:47:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:35.741 19:47:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:35.741 19:47:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:35.741 19:47:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:06:35.741 19:47:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:35.741 19:47:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:35.741 19:47:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:35.741 19:47:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:35.741 19:47:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:35.741 19:47:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:35.741 19:47:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:35.741 19:47:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:35.741 19:47:26 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:35.741 19:47:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:35.741 "name": "Existed_Raid", 00:06:35.741 "uuid": "119c0951-4d7e-458a-8fc8-d903ef71d2ae", 00:06:35.741 "strip_size_kb": 64, 00:06:35.741 "state": "configuring", 00:06:35.741 "raid_level": "raid0", 00:06:35.741 "superblock": true, 00:06:35.741 "num_base_bdevs": 3, 00:06:35.741 "num_base_bdevs_discovered": 0, 00:06:35.741 "num_base_bdevs_operational": 3, 00:06:35.741 "base_bdevs_list": [ 00:06:35.741 { 00:06:35.741 "name": "BaseBdev1", 00:06:35.741 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:35.741 "is_configured": false, 00:06:35.741 "data_offset": 0, 00:06:35.741 "data_size": 0 00:06:35.741 }, 00:06:35.741 { 00:06:35.741 "name": "BaseBdev2", 00:06:35.741 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:35.741 "is_configured": false, 00:06:35.741 "data_offset": 0, 00:06:35.741 "data_size": 0 00:06:35.741 }, 00:06:35.741 { 00:06:35.741 "name": "BaseBdev3", 00:06:35.741 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:35.741 "is_configured": false, 00:06:35.741 "data_offset": 0, 00:06:35.741 "data_size": 0 00:06:35.741 } 00:06:35.741 ] 00:06:35.741 }' 00:06:35.741 19:47:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:35.741 19:47:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:36.000 19:47:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:06:36.000 19:47:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:36.000 19:47:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:36.000 [2024-11-26 19:47:26.753273] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:06:36.000 [2024-11-26 19:47:26.753317] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:06:36.000 19:47:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:36.000 19:47:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:06:36.000 19:47:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:36.000 19:47:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:36.000 [2024-11-26 19:47:26.761275] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:06:36.000 [2024-11-26 19:47:26.761324] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:06:36.000 [2024-11-26 19:47:26.761334] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:36.000 [2024-11-26 19:47:26.761360] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:36.000 [2024-11-26 19:47:26.761367] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:06:36.000 [2024-11-26 19:47:26.761376] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:06:36.000 19:47:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:36.000 19:47:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:06:36.000 19:47:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:36.000 19:47:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:36.000 [2024-11-26 19:47:26.796881] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:06:36.000 BaseBdev1 
00:06:36.000 19:47:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:36.000 19:47:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:06:36.000 19:47:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:06:36.000 19:47:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:06:36.000 19:47:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:06:36.000 19:47:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:06:36.000 19:47:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:06:36.000 19:47:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:06:36.000 19:47:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:36.000 19:47:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:36.000 19:47:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:36.000 19:47:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:06:36.000 19:47:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:36.000 19:47:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:36.000 [ 00:06:36.000 { 00:06:36.000 "name": "BaseBdev1", 00:06:36.000 "aliases": [ 00:06:36.000 "fcb386d3-125a-40c9-b451-ec91a810aeb9" 00:06:36.000 ], 00:06:36.000 "product_name": "Malloc disk", 00:06:36.000 "block_size": 512, 00:06:36.000 "num_blocks": 65536, 00:06:36.000 "uuid": "fcb386d3-125a-40c9-b451-ec91a810aeb9", 00:06:36.000 "assigned_rate_limits": { 00:06:36.000 
"rw_ios_per_sec": 0, 00:06:36.000 "rw_mbytes_per_sec": 0, 00:06:36.000 "r_mbytes_per_sec": 0, 00:06:36.000 "w_mbytes_per_sec": 0 00:06:36.000 }, 00:06:36.000 "claimed": true, 00:06:36.000 "claim_type": "exclusive_write", 00:06:36.000 "zoned": false, 00:06:36.000 "supported_io_types": { 00:06:36.000 "read": true, 00:06:36.000 "write": true, 00:06:36.000 "unmap": true, 00:06:36.000 "flush": true, 00:06:36.000 "reset": true, 00:06:36.000 "nvme_admin": false, 00:06:36.000 "nvme_io": false, 00:06:36.000 "nvme_io_md": false, 00:06:36.000 "write_zeroes": true, 00:06:36.000 "zcopy": true, 00:06:36.000 "get_zone_info": false, 00:06:36.000 "zone_management": false, 00:06:36.000 "zone_append": false, 00:06:36.000 "compare": false, 00:06:36.000 "compare_and_write": false, 00:06:36.000 "abort": true, 00:06:36.000 "seek_hole": false, 00:06:36.000 "seek_data": false, 00:06:36.000 "copy": true, 00:06:36.000 "nvme_iov_md": false 00:06:36.000 }, 00:06:36.000 "memory_domains": [ 00:06:36.000 { 00:06:36.000 "dma_device_id": "system", 00:06:36.000 "dma_device_type": 1 00:06:36.000 }, 00:06:36.000 { 00:06:36.001 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:36.001 "dma_device_type": 2 00:06:36.001 } 00:06:36.001 ], 00:06:36.001 "driver_specific": {} 00:06:36.001 } 00:06:36.001 ] 00:06:36.001 19:47:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:36.001 19:47:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:06:36.001 19:47:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:06:36.001 19:47:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:36.001 19:47:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:36.001 19:47:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:06:36.001 19:47:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:36.001 19:47:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:06:36.001 19:47:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:36.001 19:47:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:36.001 19:47:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:36.001 19:47:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:36.001 19:47:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:36.001 19:47:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:36.001 19:47:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:36.001 19:47:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:36.001 19:47:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:36.001 19:47:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:36.001 "name": "Existed_Raid", 00:06:36.001 "uuid": "727a47a0-aebf-4976-a1c7-9911e84006a3", 00:06:36.001 "strip_size_kb": 64, 00:06:36.001 "state": "configuring", 00:06:36.001 "raid_level": "raid0", 00:06:36.001 "superblock": true, 00:06:36.001 "num_base_bdevs": 3, 00:06:36.001 "num_base_bdevs_discovered": 1, 00:06:36.001 "num_base_bdevs_operational": 3, 00:06:36.001 "base_bdevs_list": [ 00:06:36.001 { 00:06:36.001 "name": "BaseBdev1", 00:06:36.001 "uuid": "fcb386d3-125a-40c9-b451-ec91a810aeb9", 00:06:36.001 "is_configured": true, 00:06:36.001 "data_offset": 2048, 00:06:36.001 "data_size": 63488 
00:06:36.001 }, 00:06:36.001 { 00:06:36.001 "name": "BaseBdev2", 00:06:36.001 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:36.001 "is_configured": false, 00:06:36.001 "data_offset": 0, 00:06:36.001 "data_size": 0 00:06:36.001 }, 00:06:36.001 { 00:06:36.001 "name": "BaseBdev3", 00:06:36.001 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:36.001 "is_configured": false, 00:06:36.001 "data_offset": 0, 00:06:36.001 "data_size": 0 00:06:36.001 } 00:06:36.001 ] 00:06:36.001 }' 00:06:36.001 19:47:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:36.001 19:47:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:36.259 19:47:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:06:36.259 19:47:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:36.259 19:47:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:36.259 [2024-11-26 19:47:27.173030] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:06:36.259 [2024-11-26 19:47:27.173241] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:06:36.259 19:47:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:36.259 19:47:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:06:36.259 19:47:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:36.259 19:47:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:36.259 [2024-11-26 19:47:27.181086] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:06:36.259 [2024-11-26 
19:47:27.183250] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:36.259 [2024-11-26 19:47:27.183397] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:36.259 [2024-11-26 19:47:27.183462] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:06:36.259 [2024-11-26 19:47:27.183490] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:06:36.259 19:47:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:36.259 19:47:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:06:36.259 19:47:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:06:36.259 19:47:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:06:36.259 19:47:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:36.259 19:47:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:36.259 19:47:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:36.259 19:47:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:36.259 19:47:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:06:36.259 19:47:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:36.259 19:47:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:36.259 19:47:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:36.259 19:47:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:06:36.259 19:47:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:36.259 19:47:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:36.259 19:47:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:36.259 19:47:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:36.517 19:47:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:36.517 19:47:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:36.517 "name": "Existed_Raid", 00:06:36.517 "uuid": "7dd94c0d-dffc-4edb-8483-e3184955be0f", 00:06:36.517 "strip_size_kb": 64, 00:06:36.517 "state": "configuring", 00:06:36.517 "raid_level": "raid0", 00:06:36.517 "superblock": true, 00:06:36.517 "num_base_bdevs": 3, 00:06:36.517 "num_base_bdevs_discovered": 1, 00:06:36.517 "num_base_bdevs_operational": 3, 00:06:36.517 "base_bdevs_list": [ 00:06:36.517 { 00:06:36.517 "name": "BaseBdev1", 00:06:36.517 "uuid": "fcb386d3-125a-40c9-b451-ec91a810aeb9", 00:06:36.517 "is_configured": true, 00:06:36.517 "data_offset": 2048, 00:06:36.517 "data_size": 63488 00:06:36.517 }, 00:06:36.517 { 00:06:36.517 "name": "BaseBdev2", 00:06:36.517 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:36.517 "is_configured": false, 00:06:36.517 "data_offset": 0, 00:06:36.517 "data_size": 0 00:06:36.517 }, 00:06:36.517 { 00:06:36.517 "name": "BaseBdev3", 00:06:36.517 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:36.517 "is_configured": false, 00:06:36.517 "data_offset": 0, 00:06:36.517 "data_size": 0 00:06:36.517 } 00:06:36.517 ] 00:06:36.517 }' 00:06:36.517 19:47:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:36.517 19:47:27 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:06:36.776 19:47:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:06:36.776 19:47:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:36.776 19:47:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:36.776 [2024-11-26 19:47:27.526170] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:06:36.776 BaseBdev2 00:06:36.776 19:47:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:36.776 19:47:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:06:36.776 19:47:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:06:36.776 19:47:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:06:36.776 19:47:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:06:36.776 19:47:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:06:36.776 19:47:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:06:36.776 19:47:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:06:36.776 19:47:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:36.776 19:47:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:36.776 19:47:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:36.776 19:47:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:06:36.776 19:47:27 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:06:36.776 19:47:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:36.776 [ 00:06:36.776 { 00:06:36.776 "name": "BaseBdev2", 00:06:36.776 "aliases": [ 00:06:36.776 "79651bdb-2aac-4bf1-951b-01c6bc6f91ab" 00:06:36.776 ], 00:06:36.776 "product_name": "Malloc disk", 00:06:36.776 "block_size": 512, 00:06:36.776 "num_blocks": 65536, 00:06:36.776 "uuid": "79651bdb-2aac-4bf1-951b-01c6bc6f91ab", 00:06:36.776 "assigned_rate_limits": { 00:06:36.776 "rw_ios_per_sec": 0, 00:06:36.776 "rw_mbytes_per_sec": 0, 00:06:36.776 "r_mbytes_per_sec": 0, 00:06:36.776 "w_mbytes_per_sec": 0 00:06:36.776 }, 00:06:36.776 "claimed": true, 00:06:36.776 "claim_type": "exclusive_write", 00:06:36.776 "zoned": false, 00:06:36.776 "supported_io_types": { 00:06:36.776 "read": true, 00:06:36.776 "write": true, 00:06:36.776 "unmap": true, 00:06:36.776 "flush": true, 00:06:36.776 "reset": true, 00:06:36.776 "nvme_admin": false, 00:06:36.776 "nvme_io": false, 00:06:36.776 "nvme_io_md": false, 00:06:36.776 "write_zeroes": true, 00:06:36.776 "zcopy": true, 00:06:36.776 "get_zone_info": false, 00:06:36.776 "zone_management": false, 00:06:36.776 "zone_append": false, 00:06:36.776 "compare": false, 00:06:36.776 "compare_and_write": false, 00:06:36.776 "abort": true, 00:06:36.776 "seek_hole": false, 00:06:36.776 "seek_data": false, 00:06:36.776 "copy": true, 00:06:36.776 "nvme_iov_md": false 00:06:36.776 }, 00:06:36.776 "memory_domains": [ 00:06:36.776 { 00:06:36.776 "dma_device_id": "system", 00:06:36.776 "dma_device_type": 1 00:06:36.776 }, 00:06:36.776 { 00:06:36.776 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:36.776 "dma_device_type": 2 00:06:36.776 } 00:06:36.776 ], 00:06:36.776 "driver_specific": {} 00:06:36.776 } 00:06:36.776 ] 00:06:36.776 19:47:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:36.776 19:47:27 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@911 -- # return 0 00:06:36.776 19:47:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:06:36.776 19:47:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:06:36.776 19:47:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:06:36.776 19:47:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:36.776 19:47:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:36.776 19:47:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:36.776 19:47:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:36.776 19:47:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:06:36.777 19:47:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:36.777 19:47:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:36.777 19:47:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:36.777 19:47:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:36.777 19:47:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:36.777 19:47:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:36.777 19:47:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:36.777 19:47:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:36.777 19:47:27 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:36.777 19:47:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:36.777 "name": "Existed_Raid", 00:06:36.777 "uuid": "7dd94c0d-dffc-4edb-8483-e3184955be0f", 00:06:36.777 "strip_size_kb": 64, 00:06:36.777 "state": "configuring", 00:06:36.777 "raid_level": "raid0", 00:06:36.777 "superblock": true, 00:06:36.777 "num_base_bdevs": 3, 00:06:36.777 "num_base_bdevs_discovered": 2, 00:06:36.777 "num_base_bdevs_operational": 3, 00:06:36.777 "base_bdevs_list": [ 00:06:36.777 { 00:06:36.777 "name": "BaseBdev1", 00:06:36.777 "uuid": "fcb386d3-125a-40c9-b451-ec91a810aeb9", 00:06:36.777 "is_configured": true, 00:06:36.777 "data_offset": 2048, 00:06:36.777 "data_size": 63488 00:06:36.777 }, 00:06:36.777 { 00:06:36.777 "name": "BaseBdev2", 00:06:36.777 "uuid": "79651bdb-2aac-4bf1-951b-01c6bc6f91ab", 00:06:36.777 "is_configured": true, 00:06:36.777 "data_offset": 2048, 00:06:36.777 "data_size": 63488 00:06:36.777 }, 00:06:36.777 { 00:06:36.777 "name": "BaseBdev3", 00:06:36.777 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:36.777 "is_configured": false, 00:06:36.777 "data_offset": 0, 00:06:36.777 "data_size": 0 00:06:36.777 } 00:06:36.777 ] 00:06:36.777 }' 00:06:36.777 19:47:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:36.777 19:47:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:37.036 19:47:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:06:37.036 19:47:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:37.036 19:47:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:37.036 [2024-11-26 19:47:27.878406] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:06:37.036 [2024-11-26 19:47:27.878675] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:06:37.036 [2024-11-26 19:47:27.878695] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:06:37.036 [2024-11-26 19:47:27.879012] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:06:37.036 [2024-11-26 19:47:27.879219] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:06:37.036 [2024-11-26 19:47:27.879230] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:06:37.036 [2024-11-26 19:47:27.879393] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:37.036 BaseBdev3 00:06:37.036 19:47:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:37.036 19:47:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:06:37.036 19:47:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:06:37.036 19:47:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:06:37.036 19:47:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:06:37.036 19:47:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:06:37.036 19:47:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:06:37.036 19:47:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:06:37.036 19:47:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:37.036 19:47:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:37.036 19:47:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:06:37.036 19:47:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:06:37.036 19:47:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:37.036 19:47:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:37.036 [ 00:06:37.036 { 00:06:37.036 "name": "BaseBdev3", 00:06:37.036 "aliases": [ 00:06:37.036 "7d44bd8c-98c2-4df4-9d91-b9b1d76836d9" 00:06:37.036 ], 00:06:37.036 "product_name": "Malloc disk", 00:06:37.036 "block_size": 512, 00:06:37.036 "num_blocks": 65536, 00:06:37.036 "uuid": "7d44bd8c-98c2-4df4-9d91-b9b1d76836d9", 00:06:37.036 "assigned_rate_limits": { 00:06:37.036 "rw_ios_per_sec": 0, 00:06:37.036 "rw_mbytes_per_sec": 0, 00:06:37.036 "r_mbytes_per_sec": 0, 00:06:37.036 "w_mbytes_per_sec": 0 00:06:37.036 }, 00:06:37.036 "claimed": true, 00:06:37.036 "claim_type": "exclusive_write", 00:06:37.036 "zoned": false, 00:06:37.036 "supported_io_types": { 00:06:37.036 "read": true, 00:06:37.036 "write": true, 00:06:37.036 "unmap": true, 00:06:37.036 "flush": true, 00:06:37.036 "reset": true, 00:06:37.036 "nvme_admin": false, 00:06:37.036 "nvme_io": false, 00:06:37.036 "nvme_io_md": false, 00:06:37.036 "write_zeroes": true, 00:06:37.036 "zcopy": true, 00:06:37.036 "get_zone_info": false, 00:06:37.036 "zone_management": false, 00:06:37.036 "zone_append": false, 00:06:37.036 "compare": false, 00:06:37.036 "compare_and_write": false, 00:06:37.036 "abort": true, 00:06:37.036 "seek_hole": false, 00:06:37.036 "seek_data": false, 00:06:37.036 "copy": true, 00:06:37.036 "nvme_iov_md": false 00:06:37.036 }, 00:06:37.036 "memory_domains": [ 00:06:37.036 { 00:06:37.036 "dma_device_id": "system", 00:06:37.036 "dma_device_type": 1 00:06:37.036 }, 00:06:37.036 { 00:06:37.036 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:37.036 "dma_device_type": 2 00:06:37.036 } 00:06:37.036 ], 00:06:37.036 "driver_specific": 
{} 00:06:37.036 } 00:06:37.036 ] 00:06:37.036 19:47:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:37.036 19:47:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:06:37.036 19:47:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:06:37.036 19:47:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:06:37.036 19:47:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:06:37.036 19:47:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:37.036 19:47:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:06:37.036 19:47:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:37.036 19:47:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:37.036 19:47:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:06:37.036 19:47:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:37.036 19:47:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:37.036 19:47:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:37.036 19:47:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:37.036 19:47:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:37.036 19:47:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:37.036 19:47:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:06:37.036 19:47:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:37.036 19:47:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:37.036 19:47:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:37.036 "name": "Existed_Raid", 00:06:37.036 "uuid": "7dd94c0d-dffc-4edb-8483-e3184955be0f", 00:06:37.036 "strip_size_kb": 64, 00:06:37.036 "state": "online", 00:06:37.036 "raid_level": "raid0", 00:06:37.036 "superblock": true, 00:06:37.036 "num_base_bdevs": 3, 00:06:37.036 "num_base_bdevs_discovered": 3, 00:06:37.036 "num_base_bdevs_operational": 3, 00:06:37.036 "base_bdevs_list": [ 00:06:37.036 { 00:06:37.036 "name": "BaseBdev1", 00:06:37.036 "uuid": "fcb386d3-125a-40c9-b451-ec91a810aeb9", 00:06:37.036 "is_configured": true, 00:06:37.036 "data_offset": 2048, 00:06:37.036 "data_size": 63488 00:06:37.036 }, 00:06:37.036 { 00:06:37.036 "name": "BaseBdev2", 00:06:37.036 "uuid": "79651bdb-2aac-4bf1-951b-01c6bc6f91ab", 00:06:37.036 "is_configured": true, 00:06:37.036 "data_offset": 2048, 00:06:37.036 "data_size": 63488 00:06:37.036 }, 00:06:37.036 { 00:06:37.036 "name": "BaseBdev3", 00:06:37.036 "uuid": "7d44bd8c-98c2-4df4-9d91-b9b1d76836d9", 00:06:37.036 "is_configured": true, 00:06:37.036 "data_offset": 2048, 00:06:37.036 "data_size": 63488 00:06:37.036 } 00:06:37.036 ] 00:06:37.036 }' 00:06:37.036 19:47:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:37.036 19:47:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:37.604 19:47:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:06:37.604 19:47:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:06:37.604 19:47:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # 
local raid_bdev_info 00:06:37.604 19:47:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:06:37.604 19:47:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:06:37.604 19:47:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:06:37.604 19:47:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:06:37.604 19:47:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:37.604 19:47:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:06:37.604 19:47:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:37.604 [2024-11-26 19:47:28.246960] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:37.604 19:47:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:37.604 19:47:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:06:37.604 "name": "Existed_Raid", 00:06:37.604 "aliases": [ 00:06:37.604 "7dd94c0d-dffc-4edb-8483-e3184955be0f" 00:06:37.604 ], 00:06:37.604 "product_name": "Raid Volume", 00:06:37.604 "block_size": 512, 00:06:37.604 "num_blocks": 190464, 00:06:37.604 "uuid": "7dd94c0d-dffc-4edb-8483-e3184955be0f", 00:06:37.604 "assigned_rate_limits": { 00:06:37.604 "rw_ios_per_sec": 0, 00:06:37.604 "rw_mbytes_per_sec": 0, 00:06:37.604 "r_mbytes_per_sec": 0, 00:06:37.604 "w_mbytes_per_sec": 0 00:06:37.604 }, 00:06:37.604 "claimed": false, 00:06:37.604 "zoned": false, 00:06:37.604 "supported_io_types": { 00:06:37.604 "read": true, 00:06:37.604 "write": true, 00:06:37.604 "unmap": true, 00:06:37.604 "flush": true, 00:06:37.604 "reset": true, 00:06:37.604 "nvme_admin": false, 00:06:37.604 "nvme_io": false, 00:06:37.604 "nvme_io_md": false, 00:06:37.604 
"write_zeroes": true, 00:06:37.604 "zcopy": false, 00:06:37.604 "get_zone_info": false, 00:06:37.604 "zone_management": false, 00:06:37.604 "zone_append": false, 00:06:37.604 "compare": false, 00:06:37.604 "compare_and_write": false, 00:06:37.604 "abort": false, 00:06:37.604 "seek_hole": false, 00:06:37.604 "seek_data": false, 00:06:37.604 "copy": false, 00:06:37.604 "nvme_iov_md": false 00:06:37.604 }, 00:06:37.604 "memory_domains": [ 00:06:37.604 { 00:06:37.604 "dma_device_id": "system", 00:06:37.604 "dma_device_type": 1 00:06:37.604 }, 00:06:37.604 { 00:06:37.604 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:37.604 "dma_device_type": 2 00:06:37.604 }, 00:06:37.604 { 00:06:37.604 "dma_device_id": "system", 00:06:37.604 "dma_device_type": 1 00:06:37.604 }, 00:06:37.604 { 00:06:37.604 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:37.604 "dma_device_type": 2 00:06:37.604 }, 00:06:37.604 { 00:06:37.604 "dma_device_id": "system", 00:06:37.604 "dma_device_type": 1 00:06:37.604 }, 00:06:37.604 { 00:06:37.604 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:37.604 "dma_device_type": 2 00:06:37.604 } 00:06:37.604 ], 00:06:37.604 "driver_specific": { 00:06:37.604 "raid": { 00:06:37.604 "uuid": "7dd94c0d-dffc-4edb-8483-e3184955be0f", 00:06:37.604 "strip_size_kb": 64, 00:06:37.604 "state": "online", 00:06:37.604 "raid_level": "raid0", 00:06:37.604 "superblock": true, 00:06:37.604 "num_base_bdevs": 3, 00:06:37.604 "num_base_bdevs_discovered": 3, 00:06:37.604 "num_base_bdevs_operational": 3, 00:06:37.604 "base_bdevs_list": [ 00:06:37.604 { 00:06:37.604 "name": "BaseBdev1", 00:06:37.604 "uuid": "fcb386d3-125a-40c9-b451-ec91a810aeb9", 00:06:37.604 "is_configured": true, 00:06:37.604 "data_offset": 2048, 00:06:37.604 "data_size": 63488 00:06:37.604 }, 00:06:37.604 { 00:06:37.604 "name": "BaseBdev2", 00:06:37.604 "uuid": "79651bdb-2aac-4bf1-951b-01c6bc6f91ab", 00:06:37.604 "is_configured": true, 00:06:37.604 "data_offset": 2048, 00:06:37.604 "data_size": 63488 00:06:37.604 }, 
00:06:37.604 { 00:06:37.604 "name": "BaseBdev3", 00:06:37.604 "uuid": "7d44bd8c-98c2-4df4-9d91-b9b1d76836d9", 00:06:37.604 "is_configured": true, 00:06:37.604 "data_offset": 2048, 00:06:37.604 "data_size": 63488 00:06:37.604 } 00:06:37.604 ] 00:06:37.604 } 00:06:37.604 } 00:06:37.604 }' 00:06:37.604 19:47:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:06:37.604 19:47:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:06:37.604 BaseBdev2 00:06:37.604 BaseBdev3' 00:06:37.604 19:47:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:37.604 19:47:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:06:37.604 19:47:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:37.604 19:47:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:37.604 19:47:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:06:37.604 19:47:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:37.604 19:47:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:37.604 19:47:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:37.604 19:47:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:37.604 19:47:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:37.604 19:47:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:37.604 
19:47:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:06:37.604 19:47:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:37.604 19:47:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:37.604 19:47:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:37.604 19:47:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:37.604 19:47:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:37.604 19:47:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:37.604 19:47:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:37.604 19:47:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:06:37.604 19:47:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:37.604 19:47:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:37.604 19:47:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:37.604 19:47:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:37.604 19:47:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:37.604 19:47:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:37.604 19:47:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:06:37.604 19:47:28 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:06:37.604 19:47:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:37.604 [2024-11-26 19:47:28.438688] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:06:37.605 [2024-11-26 19:47:28.438720] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:37.605 [2024-11-26 19:47:28.438779] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:37.605 19:47:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:37.605 19:47:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:06:37.605 19:47:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:06:37.605 19:47:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:06:37.605 19:47:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:06:37.605 19:47:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:06:37.605 19:47:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:06:37.605 19:47:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:37.605 19:47:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:06:37.605 19:47:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:37.605 19:47:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:37.605 19:47:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:37.605 19:47:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:06:37.605 19:47:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:37.605 19:47:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:37.605 19:47:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:37.605 19:47:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:37.605 19:47:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:37.605 19:47:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:37.605 19:47:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:37.605 19:47:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:37.605 19:47:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:37.605 "name": "Existed_Raid", 00:06:37.605 "uuid": "7dd94c0d-dffc-4edb-8483-e3184955be0f", 00:06:37.605 "strip_size_kb": 64, 00:06:37.605 "state": "offline", 00:06:37.605 "raid_level": "raid0", 00:06:37.605 "superblock": true, 00:06:37.605 "num_base_bdevs": 3, 00:06:37.605 "num_base_bdevs_discovered": 2, 00:06:37.605 "num_base_bdevs_operational": 2, 00:06:37.605 "base_bdevs_list": [ 00:06:37.605 { 00:06:37.605 "name": null, 00:06:37.605 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:37.605 "is_configured": false, 00:06:37.605 "data_offset": 0, 00:06:37.605 "data_size": 63488 00:06:37.605 }, 00:06:37.605 { 00:06:37.605 "name": "BaseBdev2", 00:06:37.605 "uuid": "79651bdb-2aac-4bf1-951b-01c6bc6f91ab", 00:06:37.605 "is_configured": true, 00:06:37.605 "data_offset": 2048, 00:06:37.605 "data_size": 63488 00:06:37.605 }, 00:06:37.605 { 00:06:37.605 "name": "BaseBdev3", 00:06:37.605 "uuid": "7d44bd8c-98c2-4df4-9d91-b9b1d76836d9", 
00:06:37.605 "is_configured": true, 00:06:37.605 "data_offset": 2048, 00:06:37.605 "data_size": 63488 00:06:37.605 } 00:06:37.605 ] 00:06:37.605 }' 00:06:37.605 19:47:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:37.605 19:47:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:37.865 19:47:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:06:37.866 19:47:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:06:37.866 19:47:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:37.866 19:47:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:06:37.866 19:47:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:37.866 19:47:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:38.125 19:47:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:38.125 19:47:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:06:38.125 19:47:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:06:38.125 19:47:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:06:38.125 19:47:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:38.125 19:47:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:38.125 [2024-11-26 19:47:28.834412] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:06:38.125 19:47:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:38.125 19:47:28 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:06:38.125 19:47:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:06:38.125 19:47:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:06:38.125 19:47:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:38.125 19:47:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:38.125 19:47:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:38.125 19:47:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:38.125 19:47:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:06:38.125 19:47:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:06:38.125 19:47:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:06:38.125 19:47:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:38.125 19:47:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:38.125 [2024-11-26 19:47:28.924711] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:06:38.125 [2024-11-26 19:47:28.924762] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:06:38.125 19:47:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:38.125 19:47:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:06:38.125 19:47:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:06:38.125 19:47:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:06:38.125 19:47:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:06:38.125 19:47:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:38.125 19:47:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:38.125 19:47:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:38.126 19:47:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:06:38.126 19:47:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:06:38.126 19:47:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:06:38.126 19:47:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:06:38.126 19:47:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:06:38.126 19:47:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:06:38.126 19:47:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:38.126 19:47:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:38.126 BaseBdev2 00:06:38.126 19:47:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:38.126 19:47:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:06:38.126 19:47:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:06:38.126 19:47:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:06:38.126 19:47:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:06:38.126 19:47:29 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:06:38.126 19:47:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:06:38.126 19:47:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:06:38.126 19:47:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:38.126 19:47:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:38.126 19:47:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:38.126 19:47:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:06:38.126 19:47:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:38.126 19:47:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:38.383 [ 00:06:38.383 { 00:06:38.383 "name": "BaseBdev2", 00:06:38.383 "aliases": [ 00:06:38.383 "fac80132-53e2-4ee4-aa69-5de69b360c35" 00:06:38.383 ], 00:06:38.383 "product_name": "Malloc disk", 00:06:38.383 "block_size": 512, 00:06:38.383 "num_blocks": 65536, 00:06:38.383 "uuid": "fac80132-53e2-4ee4-aa69-5de69b360c35", 00:06:38.383 "assigned_rate_limits": { 00:06:38.383 "rw_ios_per_sec": 0, 00:06:38.383 "rw_mbytes_per_sec": 0, 00:06:38.383 "r_mbytes_per_sec": 0, 00:06:38.383 "w_mbytes_per_sec": 0 00:06:38.383 }, 00:06:38.383 "claimed": false, 00:06:38.383 "zoned": false, 00:06:38.383 "supported_io_types": { 00:06:38.383 "read": true, 00:06:38.383 "write": true, 00:06:38.383 "unmap": true, 00:06:38.383 "flush": true, 00:06:38.383 "reset": true, 00:06:38.383 "nvme_admin": false, 00:06:38.383 "nvme_io": false, 00:06:38.383 "nvme_io_md": false, 00:06:38.383 "write_zeroes": true, 00:06:38.383 "zcopy": true, 00:06:38.383 "get_zone_info": false, 00:06:38.383 
"zone_management": false, 00:06:38.383 "zone_append": false, 00:06:38.383 "compare": false, 00:06:38.383 "compare_and_write": false, 00:06:38.383 "abort": true, 00:06:38.383 "seek_hole": false, 00:06:38.383 "seek_data": false, 00:06:38.383 "copy": true, 00:06:38.383 "nvme_iov_md": false 00:06:38.383 }, 00:06:38.383 "memory_domains": [ 00:06:38.383 { 00:06:38.383 "dma_device_id": "system", 00:06:38.383 "dma_device_type": 1 00:06:38.383 }, 00:06:38.383 { 00:06:38.383 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:38.383 "dma_device_type": 2 00:06:38.383 } 00:06:38.383 ], 00:06:38.383 "driver_specific": {} 00:06:38.383 } 00:06:38.383 ] 00:06:38.383 19:47:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:38.383 19:47:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:06:38.383 19:47:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:06:38.383 19:47:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:06:38.383 19:47:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:06:38.383 19:47:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:38.383 19:47:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:38.383 BaseBdev3 00:06:38.383 19:47:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:38.383 19:47:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:06:38.383 19:47:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:06:38.383 19:47:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:06:38.383 19:47:29 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@905 -- # local i 00:06:38.383 19:47:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:06:38.383 19:47:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:06:38.383 19:47:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:06:38.383 19:47:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:38.383 19:47:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:38.383 19:47:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:38.383 19:47:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:06:38.383 19:47:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:38.383 19:47:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:38.383 [ 00:06:38.383 { 00:06:38.383 "name": "BaseBdev3", 00:06:38.383 "aliases": [ 00:06:38.383 "11e77210-4776-4a03-adfc-884b59773c53" 00:06:38.383 ], 00:06:38.383 "product_name": "Malloc disk", 00:06:38.383 "block_size": 512, 00:06:38.383 "num_blocks": 65536, 00:06:38.383 "uuid": "11e77210-4776-4a03-adfc-884b59773c53", 00:06:38.383 "assigned_rate_limits": { 00:06:38.383 "rw_ios_per_sec": 0, 00:06:38.383 "rw_mbytes_per_sec": 0, 00:06:38.383 "r_mbytes_per_sec": 0, 00:06:38.383 "w_mbytes_per_sec": 0 00:06:38.383 }, 00:06:38.383 "claimed": false, 00:06:38.383 "zoned": false, 00:06:38.383 "supported_io_types": { 00:06:38.383 "read": true, 00:06:38.383 "write": true, 00:06:38.383 "unmap": true, 00:06:38.383 "flush": true, 00:06:38.383 "reset": true, 00:06:38.383 "nvme_admin": false, 00:06:38.383 "nvme_io": false, 00:06:38.383 "nvme_io_md": false, 00:06:38.383 "write_zeroes": true, 00:06:38.383 
"zcopy": true, 00:06:38.383 "get_zone_info": false, 00:06:38.383 "zone_management": false, 00:06:38.383 "zone_append": false, 00:06:38.383 "compare": false, 00:06:38.383 "compare_and_write": false, 00:06:38.383 "abort": true, 00:06:38.383 "seek_hole": false, 00:06:38.383 "seek_data": false, 00:06:38.383 "copy": true, 00:06:38.383 "nvme_iov_md": false 00:06:38.383 }, 00:06:38.383 "memory_domains": [ 00:06:38.383 { 00:06:38.383 "dma_device_id": "system", 00:06:38.383 "dma_device_type": 1 00:06:38.383 }, 00:06:38.383 { 00:06:38.383 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:38.383 "dma_device_type": 2 00:06:38.383 } 00:06:38.383 ], 00:06:38.383 "driver_specific": {} 00:06:38.383 } 00:06:38.383 ] 00:06:38.383 19:47:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:38.383 19:47:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:06:38.383 19:47:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:06:38.383 19:47:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:06:38.383 19:47:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:06:38.383 19:47:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:38.383 19:47:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:38.383 [2024-11-26 19:47:29.124736] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:06:38.383 [2024-11-26 19:47:29.124919] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:06:38.383 [2024-11-26 19:47:29.125409] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:06:38.383 [2024-11-26 19:47:29.127285] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:06:38.383 19:47:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:38.383 19:47:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:06:38.383 19:47:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:38.383 19:47:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:38.383 19:47:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:38.383 19:47:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:38.383 19:47:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:06:38.383 19:47:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:38.383 19:47:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:38.383 19:47:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:38.383 19:47:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:38.383 19:47:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:38.383 19:47:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:38.383 19:47:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:38.383 19:47:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:38.383 19:47:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:38.383 19:47:29 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:38.383 "name": "Existed_Raid", 00:06:38.383 "uuid": "4c81b36c-c46f-4ebf-a5ac-e2f8c52e5277", 00:06:38.383 "strip_size_kb": 64, 00:06:38.383 "state": "configuring", 00:06:38.383 "raid_level": "raid0", 00:06:38.383 "superblock": true, 00:06:38.383 "num_base_bdevs": 3, 00:06:38.384 "num_base_bdevs_discovered": 2, 00:06:38.384 "num_base_bdevs_operational": 3, 00:06:38.384 "base_bdevs_list": [ 00:06:38.384 { 00:06:38.384 "name": "BaseBdev1", 00:06:38.384 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:38.384 "is_configured": false, 00:06:38.384 "data_offset": 0, 00:06:38.384 "data_size": 0 00:06:38.384 }, 00:06:38.384 { 00:06:38.384 "name": "BaseBdev2", 00:06:38.384 "uuid": "fac80132-53e2-4ee4-aa69-5de69b360c35", 00:06:38.384 "is_configured": true, 00:06:38.384 "data_offset": 2048, 00:06:38.384 "data_size": 63488 00:06:38.384 }, 00:06:38.384 { 00:06:38.384 "name": "BaseBdev3", 00:06:38.384 "uuid": "11e77210-4776-4a03-adfc-884b59773c53", 00:06:38.384 "is_configured": true, 00:06:38.384 "data_offset": 2048, 00:06:38.384 "data_size": 63488 00:06:38.384 } 00:06:38.384 ] 00:06:38.384 }' 00:06:38.384 19:47:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:38.384 19:47:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:38.641 19:47:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:06:38.641 19:47:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:38.641 19:47:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:38.641 [2024-11-26 19:47:29.432817] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:06:38.641 19:47:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:38.641 19:47:29 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:06:38.641 19:47:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:38.641 19:47:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:38.641 19:47:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:38.641 19:47:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:38.641 19:47:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:06:38.641 19:47:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:38.641 19:47:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:38.641 19:47:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:38.641 19:47:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:38.641 19:47:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:38.641 19:47:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:38.641 19:47:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:38.641 19:47:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:38.641 19:47:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:38.641 19:47:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:38.641 "name": "Existed_Raid", 00:06:38.641 "uuid": "4c81b36c-c46f-4ebf-a5ac-e2f8c52e5277", 00:06:38.641 "strip_size_kb": 64, 
00:06:38.641 "state": "configuring", 00:06:38.641 "raid_level": "raid0", 00:06:38.641 "superblock": true, 00:06:38.641 "num_base_bdevs": 3, 00:06:38.641 "num_base_bdevs_discovered": 1, 00:06:38.641 "num_base_bdevs_operational": 3, 00:06:38.641 "base_bdevs_list": [ 00:06:38.641 { 00:06:38.641 "name": "BaseBdev1", 00:06:38.641 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:38.641 "is_configured": false, 00:06:38.641 "data_offset": 0, 00:06:38.641 "data_size": 0 00:06:38.641 }, 00:06:38.641 { 00:06:38.641 "name": null, 00:06:38.641 "uuid": "fac80132-53e2-4ee4-aa69-5de69b360c35", 00:06:38.642 "is_configured": false, 00:06:38.642 "data_offset": 0, 00:06:38.642 "data_size": 63488 00:06:38.642 }, 00:06:38.642 { 00:06:38.642 "name": "BaseBdev3", 00:06:38.642 "uuid": "11e77210-4776-4a03-adfc-884b59773c53", 00:06:38.642 "is_configured": true, 00:06:38.642 "data_offset": 2048, 00:06:38.642 "data_size": 63488 00:06:38.642 } 00:06:38.642 ] 00:06:38.642 }' 00:06:38.642 19:47:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:38.642 19:47:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:38.899 19:47:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:38.899 19:47:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:38.899 19:47:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:06:38.899 19:47:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:38.899 19:47:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:38.899 19:47:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:06:38.899 19:47:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev1 00:06:38.899 19:47:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:38.899 19:47:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:38.899 [2024-11-26 19:47:29.809537] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:06:38.899 BaseBdev1 00:06:38.899 19:47:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:38.899 19:47:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:06:38.899 19:47:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:06:38.899 19:47:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:06:38.899 19:47:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:06:38.899 19:47:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:06:38.899 19:47:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:06:38.899 19:47:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:06:38.899 19:47:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:38.899 19:47:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:38.899 19:47:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:38.899 19:47:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:06:38.899 19:47:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:38.899 19:47:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:38.899 
[ 00:06:38.899 { 00:06:38.899 "name": "BaseBdev1", 00:06:38.899 "aliases": [ 00:06:38.899 "6034028e-df8b-44ef-a799-d4d7a580844f" 00:06:38.899 ], 00:06:38.899 "product_name": "Malloc disk", 00:06:38.899 "block_size": 512, 00:06:38.899 "num_blocks": 65536, 00:06:38.899 "uuid": "6034028e-df8b-44ef-a799-d4d7a580844f", 00:06:38.899 "assigned_rate_limits": { 00:06:38.899 "rw_ios_per_sec": 0, 00:06:38.899 "rw_mbytes_per_sec": 0, 00:06:38.899 "r_mbytes_per_sec": 0, 00:06:38.899 "w_mbytes_per_sec": 0 00:06:38.899 }, 00:06:38.899 "claimed": true, 00:06:38.899 "claim_type": "exclusive_write", 00:06:38.899 "zoned": false, 00:06:38.899 "supported_io_types": { 00:06:38.899 "read": true, 00:06:38.899 "write": true, 00:06:38.899 "unmap": true, 00:06:38.899 "flush": true, 00:06:38.899 "reset": true, 00:06:38.899 "nvme_admin": false, 00:06:38.899 "nvme_io": false, 00:06:38.899 "nvme_io_md": false, 00:06:38.899 "write_zeroes": true, 00:06:38.899 "zcopy": true, 00:06:38.899 "get_zone_info": false, 00:06:38.899 "zone_management": false, 00:06:38.899 "zone_append": false, 00:06:38.899 "compare": false, 00:06:38.899 "compare_and_write": false, 00:06:38.899 "abort": true, 00:06:38.899 "seek_hole": false, 00:06:38.899 "seek_data": false, 00:06:38.899 "copy": true, 00:06:38.899 "nvme_iov_md": false 00:06:38.899 }, 00:06:38.899 "memory_domains": [ 00:06:39.157 { 00:06:39.157 "dma_device_id": "system", 00:06:39.158 "dma_device_type": 1 00:06:39.158 }, 00:06:39.158 { 00:06:39.158 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:39.158 "dma_device_type": 2 00:06:39.158 } 00:06:39.158 ], 00:06:39.158 "driver_specific": {} 00:06:39.158 } 00:06:39.158 ] 00:06:39.158 19:47:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:39.158 19:47:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:06:39.158 19:47:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid 
configuring raid0 64 3 00:06:39.158 19:47:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:39.158 19:47:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:39.158 19:47:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:39.158 19:47:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:39.158 19:47:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:06:39.158 19:47:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:39.158 19:47:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:39.158 19:47:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:39.158 19:47:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:39.158 19:47:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:39.158 19:47:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:39.158 19:47:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:39.158 19:47:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:39.158 19:47:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:39.158 19:47:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:39.158 "name": "Existed_Raid", 00:06:39.158 "uuid": "4c81b36c-c46f-4ebf-a5ac-e2f8c52e5277", 00:06:39.158 "strip_size_kb": 64, 00:06:39.158 "state": "configuring", 00:06:39.158 "raid_level": "raid0", 00:06:39.158 "superblock": true, 
00:06:39.158 "num_base_bdevs": 3, 00:06:39.158 "num_base_bdevs_discovered": 2, 00:06:39.158 "num_base_bdevs_operational": 3, 00:06:39.158 "base_bdevs_list": [ 00:06:39.158 { 00:06:39.158 "name": "BaseBdev1", 00:06:39.158 "uuid": "6034028e-df8b-44ef-a799-d4d7a580844f", 00:06:39.158 "is_configured": true, 00:06:39.158 "data_offset": 2048, 00:06:39.158 "data_size": 63488 00:06:39.158 }, 00:06:39.158 { 00:06:39.158 "name": null, 00:06:39.158 "uuid": "fac80132-53e2-4ee4-aa69-5de69b360c35", 00:06:39.158 "is_configured": false, 00:06:39.158 "data_offset": 0, 00:06:39.158 "data_size": 63488 00:06:39.158 }, 00:06:39.158 { 00:06:39.158 "name": "BaseBdev3", 00:06:39.158 "uuid": "11e77210-4776-4a03-adfc-884b59773c53", 00:06:39.158 "is_configured": true, 00:06:39.158 "data_offset": 2048, 00:06:39.158 "data_size": 63488 00:06:39.158 } 00:06:39.158 ] 00:06:39.158 }' 00:06:39.158 19:47:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:39.158 19:47:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:39.416 19:47:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:06:39.416 19:47:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:39.416 19:47:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:39.416 19:47:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:39.416 19:47:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:39.416 19:47:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:06:39.416 19:47:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:06:39.416 19:47:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:06:39.416 19:47:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:39.416 [2024-11-26 19:47:30.173666] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:06:39.416 19:47:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:39.416 19:47:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:06:39.416 19:47:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:39.416 19:47:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:39.416 19:47:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:39.416 19:47:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:39.416 19:47:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:06:39.416 19:47:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:39.416 19:47:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:39.416 19:47:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:39.416 19:47:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:39.416 19:47:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:39.416 19:47:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:39.416 19:47:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:39.416 19:47:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:06:39.416 19:47:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:39.416 19:47:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:39.416 "name": "Existed_Raid", 00:06:39.416 "uuid": "4c81b36c-c46f-4ebf-a5ac-e2f8c52e5277", 00:06:39.416 "strip_size_kb": 64, 00:06:39.416 "state": "configuring", 00:06:39.416 "raid_level": "raid0", 00:06:39.416 "superblock": true, 00:06:39.416 "num_base_bdevs": 3, 00:06:39.416 "num_base_bdevs_discovered": 1, 00:06:39.416 "num_base_bdevs_operational": 3, 00:06:39.416 "base_bdevs_list": [ 00:06:39.416 { 00:06:39.416 "name": "BaseBdev1", 00:06:39.416 "uuid": "6034028e-df8b-44ef-a799-d4d7a580844f", 00:06:39.416 "is_configured": true, 00:06:39.416 "data_offset": 2048, 00:06:39.416 "data_size": 63488 00:06:39.416 }, 00:06:39.416 { 00:06:39.416 "name": null, 00:06:39.416 "uuid": "fac80132-53e2-4ee4-aa69-5de69b360c35", 00:06:39.416 "is_configured": false, 00:06:39.416 "data_offset": 0, 00:06:39.416 "data_size": 63488 00:06:39.416 }, 00:06:39.417 { 00:06:39.417 "name": null, 00:06:39.417 "uuid": "11e77210-4776-4a03-adfc-884b59773c53", 00:06:39.417 "is_configured": false, 00:06:39.417 "data_offset": 0, 00:06:39.417 "data_size": 63488 00:06:39.417 } 00:06:39.417 ] 00:06:39.417 }' 00:06:39.417 19:47:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:39.417 19:47:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:39.674 19:47:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:06:39.674 19:47:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:39.674 19:47:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:39.674 19:47:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:06:39.674 19:47:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:39.674 19:47:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:06:39.674 19:47:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:06:39.674 19:47:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:39.674 19:47:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:39.674 [2024-11-26 19:47:30.513780] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:06:39.674 19:47:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:39.674 19:47:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:06:39.674 19:47:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:39.674 19:47:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:39.674 19:47:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:39.674 19:47:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:39.674 19:47:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:06:39.674 19:47:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:39.674 19:47:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:39.674 19:47:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:39.674 19:47:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:06:39.674 19:47:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:39.674 19:47:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:39.674 19:47:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:39.674 19:47:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:39.674 19:47:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:39.674 19:47:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:39.674 "name": "Existed_Raid", 00:06:39.674 "uuid": "4c81b36c-c46f-4ebf-a5ac-e2f8c52e5277", 00:06:39.674 "strip_size_kb": 64, 00:06:39.674 "state": "configuring", 00:06:39.674 "raid_level": "raid0", 00:06:39.674 "superblock": true, 00:06:39.674 "num_base_bdevs": 3, 00:06:39.674 "num_base_bdevs_discovered": 2, 00:06:39.674 "num_base_bdevs_operational": 3, 00:06:39.674 "base_bdevs_list": [ 00:06:39.674 { 00:06:39.674 "name": "BaseBdev1", 00:06:39.674 "uuid": "6034028e-df8b-44ef-a799-d4d7a580844f", 00:06:39.674 "is_configured": true, 00:06:39.674 "data_offset": 2048, 00:06:39.674 "data_size": 63488 00:06:39.674 }, 00:06:39.674 { 00:06:39.674 "name": null, 00:06:39.674 "uuid": "fac80132-53e2-4ee4-aa69-5de69b360c35", 00:06:39.674 "is_configured": false, 00:06:39.674 "data_offset": 0, 00:06:39.674 "data_size": 63488 00:06:39.674 }, 00:06:39.674 { 00:06:39.674 "name": "BaseBdev3", 00:06:39.674 "uuid": "11e77210-4776-4a03-adfc-884b59773c53", 00:06:39.674 "is_configured": true, 00:06:39.674 "data_offset": 2048, 00:06:39.674 "data_size": 63488 00:06:39.674 } 00:06:39.674 ] 00:06:39.674 }' 00:06:39.674 19:47:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:39.674 19:47:30 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:06:39.932 19:47:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:06:39.932 19:47:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:39.932 19:47:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:39.932 19:47:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:39.932 19:47:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:39.932 19:47:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:06:39.932 19:47:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:06:39.932 19:47:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:39.932 19:47:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:39.932 [2024-11-26 19:47:30.857839] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:06:40.191 19:47:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:40.191 19:47:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:06:40.191 19:47:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:40.191 19:47:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:40.191 19:47:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:40.191 19:47:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:40.191 19:47:30 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:06:40.191 19:47:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:40.191 19:47:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:40.191 19:47:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:40.191 19:47:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:40.191 19:47:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:40.191 19:47:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:40.191 19:47:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:40.191 19:47:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:40.191 19:47:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:40.191 19:47:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:40.191 "name": "Existed_Raid", 00:06:40.191 "uuid": "4c81b36c-c46f-4ebf-a5ac-e2f8c52e5277", 00:06:40.191 "strip_size_kb": 64, 00:06:40.191 "state": "configuring", 00:06:40.191 "raid_level": "raid0", 00:06:40.191 "superblock": true, 00:06:40.191 "num_base_bdevs": 3, 00:06:40.191 "num_base_bdevs_discovered": 1, 00:06:40.191 "num_base_bdevs_operational": 3, 00:06:40.191 "base_bdevs_list": [ 00:06:40.191 { 00:06:40.191 "name": null, 00:06:40.191 "uuid": "6034028e-df8b-44ef-a799-d4d7a580844f", 00:06:40.191 "is_configured": false, 00:06:40.191 "data_offset": 0, 00:06:40.191 "data_size": 63488 00:06:40.191 }, 00:06:40.191 { 00:06:40.191 "name": null, 00:06:40.191 "uuid": "fac80132-53e2-4ee4-aa69-5de69b360c35", 00:06:40.191 "is_configured": false, 00:06:40.191 "data_offset": 0, 00:06:40.191 
"data_size": 63488 00:06:40.191 }, 00:06:40.191 { 00:06:40.191 "name": "BaseBdev3", 00:06:40.191 "uuid": "11e77210-4776-4a03-adfc-884b59773c53", 00:06:40.191 "is_configured": true, 00:06:40.191 "data_offset": 2048, 00:06:40.191 "data_size": 63488 00:06:40.191 } 00:06:40.191 ] 00:06:40.191 }' 00:06:40.191 19:47:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:40.191 19:47:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:40.449 19:47:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:06:40.449 19:47:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:40.449 19:47:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:40.449 19:47:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:40.449 19:47:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:40.449 19:47:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:06:40.449 19:47:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:06:40.449 19:47:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:40.449 19:47:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:40.449 [2024-11-26 19:47:31.256892] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:06:40.449 19:47:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:40.449 19:47:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:06:40.449 19:47:31 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:40.449 19:47:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:40.449 19:47:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:40.449 19:47:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:40.449 19:47:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:06:40.449 19:47:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:40.449 19:47:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:40.449 19:47:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:40.449 19:47:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:40.449 19:47:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:40.449 19:47:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:40.449 19:47:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:40.449 19:47:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:40.449 19:47:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:40.449 19:47:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:40.449 "name": "Existed_Raid", 00:06:40.449 "uuid": "4c81b36c-c46f-4ebf-a5ac-e2f8c52e5277", 00:06:40.449 "strip_size_kb": 64, 00:06:40.449 "state": "configuring", 00:06:40.449 "raid_level": "raid0", 00:06:40.449 "superblock": true, 00:06:40.449 "num_base_bdevs": 3, 00:06:40.449 
"num_base_bdevs_discovered": 2, 00:06:40.449 "num_base_bdevs_operational": 3, 00:06:40.449 "base_bdevs_list": [ 00:06:40.449 { 00:06:40.449 "name": null, 00:06:40.449 "uuid": "6034028e-df8b-44ef-a799-d4d7a580844f", 00:06:40.449 "is_configured": false, 00:06:40.449 "data_offset": 0, 00:06:40.449 "data_size": 63488 00:06:40.449 }, 00:06:40.449 { 00:06:40.449 "name": "BaseBdev2", 00:06:40.449 "uuid": "fac80132-53e2-4ee4-aa69-5de69b360c35", 00:06:40.450 "is_configured": true, 00:06:40.450 "data_offset": 2048, 00:06:40.450 "data_size": 63488 00:06:40.450 }, 00:06:40.450 { 00:06:40.450 "name": "BaseBdev3", 00:06:40.450 "uuid": "11e77210-4776-4a03-adfc-884b59773c53", 00:06:40.450 "is_configured": true, 00:06:40.450 "data_offset": 2048, 00:06:40.450 "data_size": 63488 00:06:40.450 } 00:06:40.450 ] 00:06:40.450 }' 00:06:40.450 19:47:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:40.450 19:47:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:40.774 19:47:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:40.774 19:47:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:40.774 19:47:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:40.774 19:47:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:06:40.774 19:47:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:40.774 19:47:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:06:40.774 19:47:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:06:40.774 19:47:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:40.774 19:47:31 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:40.774 19:47:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:40.774 19:47:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:40.774 19:47:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 6034028e-df8b-44ef-a799-d4d7a580844f 00:06:40.774 19:47:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:40.774 19:47:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:40.774 [2024-11-26 19:47:31.634014] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:06:40.774 [2024-11-26 19:47:31.634222] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:06:40.774 [2024-11-26 19:47:31.634237] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:06:40.774 [2024-11-26 19:47:31.634476] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:06:40.774 NewBaseBdev 00:06:40.774 [2024-11-26 19:47:31.634591] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:06:40.774 [2024-11-26 19:47:31.634598] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:06:40.774 [2024-11-26 19:47:31.634708] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:40.774 19:47:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:40.774 19:47:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:06:40.774 19:47:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:06:40.774 
19:47:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:06:40.774 19:47:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:06:40.774 19:47:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:06:40.774 19:47:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:06:40.774 19:47:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:06:40.774 19:47:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:40.774 19:47:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:40.774 19:47:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:40.774 19:47:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:06:40.774 19:47:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:40.774 19:47:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:40.774 [ 00:06:40.774 { 00:06:40.774 "name": "NewBaseBdev", 00:06:40.774 "aliases": [ 00:06:40.774 "6034028e-df8b-44ef-a799-d4d7a580844f" 00:06:40.774 ], 00:06:40.774 "product_name": "Malloc disk", 00:06:40.774 "block_size": 512, 00:06:40.774 "num_blocks": 65536, 00:06:40.774 "uuid": "6034028e-df8b-44ef-a799-d4d7a580844f", 00:06:40.774 "assigned_rate_limits": { 00:06:40.774 "rw_ios_per_sec": 0, 00:06:40.774 "rw_mbytes_per_sec": 0, 00:06:40.774 "r_mbytes_per_sec": 0, 00:06:40.774 "w_mbytes_per_sec": 0 00:06:40.774 }, 00:06:40.774 "claimed": true, 00:06:40.774 "claim_type": "exclusive_write", 00:06:40.774 "zoned": false, 00:06:40.774 "supported_io_types": { 00:06:40.774 "read": true, 00:06:40.774 "write": true, 00:06:40.774 
"unmap": true, 00:06:40.774 "flush": true, 00:06:40.774 "reset": true, 00:06:40.774 "nvme_admin": false, 00:06:40.774 "nvme_io": false, 00:06:40.774 "nvme_io_md": false, 00:06:40.774 "write_zeroes": true, 00:06:40.774 "zcopy": true, 00:06:40.774 "get_zone_info": false, 00:06:40.774 "zone_management": false, 00:06:40.774 "zone_append": false, 00:06:40.774 "compare": false, 00:06:40.774 "compare_and_write": false, 00:06:40.774 "abort": true, 00:06:40.774 "seek_hole": false, 00:06:40.774 "seek_data": false, 00:06:40.774 "copy": true, 00:06:40.774 "nvme_iov_md": false 00:06:40.774 }, 00:06:40.774 "memory_domains": [ 00:06:40.774 { 00:06:40.774 "dma_device_id": "system", 00:06:40.774 "dma_device_type": 1 00:06:40.774 }, 00:06:40.774 { 00:06:40.774 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:40.774 "dma_device_type": 2 00:06:40.774 } 00:06:40.774 ], 00:06:40.774 "driver_specific": {} 00:06:40.774 } 00:06:40.774 ] 00:06:40.774 19:47:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:40.774 19:47:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:06:40.774 19:47:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:06:40.774 19:47:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:40.774 19:47:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:06:40.774 19:47:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:40.774 19:47:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:40.774 19:47:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:06:40.774 19:47:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:06:40.774 19:47:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:40.774 19:47:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:40.774 19:47:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:40.774 19:47:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:40.774 19:47:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:40.774 19:47:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:40.774 19:47:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:40.774 19:47:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:40.774 19:47:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:40.774 "name": "Existed_Raid", 00:06:40.774 "uuid": "4c81b36c-c46f-4ebf-a5ac-e2f8c52e5277", 00:06:40.774 "strip_size_kb": 64, 00:06:40.774 "state": "online", 00:06:40.774 "raid_level": "raid0", 00:06:40.774 "superblock": true, 00:06:40.774 "num_base_bdevs": 3, 00:06:40.774 "num_base_bdevs_discovered": 3, 00:06:40.774 "num_base_bdevs_operational": 3, 00:06:40.774 "base_bdevs_list": [ 00:06:40.774 { 00:06:40.774 "name": "NewBaseBdev", 00:06:40.775 "uuid": "6034028e-df8b-44ef-a799-d4d7a580844f", 00:06:40.775 "is_configured": true, 00:06:40.775 "data_offset": 2048, 00:06:40.775 "data_size": 63488 00:06:40.775 }, 00:06:40.775 { 00:06:40.775 "name": "BaseBdev2", 00:06:40.775 "uuid": "fac80132-53e2-4ee4-aa69-5de69b360c35", 00:06:40.775 "is_configured": true, 00:06:40.775 "data_offset": 2048, 00:06:40.775 "data_size": 63488 00:06:40.775 }, 00:06:40.775 { 00:06:40.775 "name": "BaseBdev3", 00:06:40.775 "uuid": "11e77210-4776-4a03-adfc-884b59773c53", 00:06:40.775 
"is_configured": true, 00:06:40.775 "data_offset": 2048, 00:06:40.775 "data_size": 63488 00:06:40.775 } 00:06:40.775 ] 00:06:40.775 }' 00:06:40.775 19:47:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:40.775 19:47:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:41.355 19:47:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:06:41.355 19:47:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:06:41.355 19:47:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:06:41.355 19:47:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:06:41.355 19:47:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:06:41.355 19:47:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:06:41.355 19:47:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:06:41.355 19:47:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:06:41.355 19:47:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:41.355 19:47:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:41.355 [2024-11-26 19:47:32.006452] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:41.355 19:47:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:41.355 19:47:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:06:41.355 "name": "Existed_Raid", 00:06:41.355 "aliases": [ 00:06:41.355 "4c81b36c-c46f-4ebf-a5ac-e2f8c52e5277" 00:06:41.355 ], 00:06:41.355 "product_name": "Raid 
Volume", 00:06:41.355 "block_size": 512, 00:06:41.355 "num_blocks": 190464, 00:06:41.355 "uuid": "4c81b36c-c46f-4ebf-a5ac-e2f8c52e5277", 00:06:41.356 "assigned_rate_limits": { 00:06:41.356 "rw_ios_per_sec": 0, 00:06:41.356 "rw_mbytes_per_sec": 0, 00:06:41.356 "r_mbytes_per_sec": 0, 00:06:41.356 "w_mbytes_per_sec": 0 00:06:41.356 }, 00:06:41.356 "claimed": false, 00:06:41.356 "zoned": false, 00:06:41.356 "supported_io_types": { 00:06:41.356 "read": true, 00:06:41.356 "write": true, 00:06:41.356 "unmap": true, 00:06:41.356 "flush": true, 00:06:41.356 "reset": true, 00:06:41.356 "nvme_admin": false, 00:06:41.356 "nvme_io": false, 00:06:41.356 "nvme_io_md": false, 00:06:41.356 "write_zeroes": true, 00:06:41.356 "zcopy": false, 00:06:41.356 "get_zone_info": false, 00:06:41.356 "zone_management": false, 00:06:41.356 "zone_append": false, 00:06:41.356 "compare": false, 00:06:41.356 "compare_and_write": false, 00:06:41.356 "abort": false, 00:06:41.356 "seek_hole": false, 00:06:41.356 "seek_data": false, 00:06:41.356 "copy": false, 00:06:41.356 "nvme_iov_md": false 00:06:41.356 }, 00:06:41.356 "memory_domains": [ 00:06:41.356 { 00:06:41.356 "dma_device_id": "system", 00:06:41.356 "dma_device_type": 1 00:06:41.356 }, 00:06:41.356 { 00:06:41.356 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:41.356 "dma_device_type": 2 00:06:41.356 }, 00:06:41.356 { 00:06:41.356 "dma_device_id": "system", 00:06:41.356 "dma_device_type": 1 00:06:41.356 }, 00:06:41.356 { 00:06:41.356 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:41.356 "dma_device_type": 2 00:06:41.356 }, 00:06:41.356 { 00:06:41.356 "dma_device_id": "system", 00:06:41.356 "dma_device_type": 1 00:06:41.356 }, 00:06:41.356 { 00:06:41.356 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:41.356 "dma_device_type": 2 00:06:41.356 } 00:06:41.356 ], 00:06:41.356 "driver_specific": { 00:06:41.356 "raid": { 00:06:41.356 "uuid": "4c81b36c-c46f-4ebf-a5ac-e2f8c52e5277", 00:06:41.356 "strip_size_kb": 64, 00:06:41.356 "state": "online", 
00:06:41.356 "raid_level": "raid0", 00:06:41.356 "superblock": true, 00:06:41.356 "num_base_bdevs": 3, 00:06:41.356 "num_base_bdevs_discovered": 3, 00:06:41.356 "num_base_bdevs_operational": 3, 00:06:41.356 "base_bdevs_list": [ 00:06:41.356 { 00:06:41.356 "name": "NewBaseBdev", 00:06:41.356 "uuid": "6034028e-df8b-44ef-a799-d4d7a580844f", 00:06:41.356 "is_configured": true, 00:06:41.356 "data_offset": 2048, 00:06:41.356 "data_size": 63488 00:06:41.356 }, 00:06:41.356 { 00:06:41.356 "name": "BaseBdev2", 00:06:41.356 "uuid": "fac80132-53e2-4ee4-aa69-5de69b360c35", 00:06:41.356 "is_configured": true, 00:06:41.356 "data_offset": 2048, 00:06:41.356 "data_size": 63488 00:06:41.356 }, 00:06:41.356 { 00:06:41.356 "name": "BaseBdev3", 00:06:41.356 "uuid": "11e77210-4776-4a03-adfc-884b59773c53", 00:06:41.356 "is_configured": true, 00:06:41.356 "data_offset": 2048, 00:06:41.356 "data_size": 63488 00:06:41.356 } 00:06:41.356 ] 00:06:41.356 } 00:06:41.356 } 00:06:41.356 }' 00:06:41.356 19:47:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:06:41.356 19:47:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:06:41.356 BaseBdev2 00:06:41.356 BaseBdev3' 00:06:41.356 19:47:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:41.356 19:47:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:06:41.356 19:47:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:41.356 19:47:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:41.356 19:47:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b 
NewBaseBdev 00:06:41.356 19:47:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:41.356 19:47:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:41.356 19:47:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:41.356 19:47:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:41.356 19:47:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:41.356 19:47:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:41.356 19:47:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:06:41.356 19:47:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:41.356 19:47:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:41.356 19:47:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:41.356 19:47:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:41.356 19:47:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:41.356 19:47:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:41.356 19:47:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:41.356 19:47:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:06:41.356 19:47:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:41.356 19:47:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:06:41.356 19:47:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:41.356 19:47:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:41.356 19:47:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:41.356 19:47:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:41.356 19:47:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:06:41.356 19:47:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:41.356 19:47:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:41.356 [2024-11-26 19:47:32.194187] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:06:41.356 [2024-11-26 19:47:32.194221] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:41.356 [2024-11-26 19:47:32.194308] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:41.356 [2024-11-26 19:47:32.194384] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:41.356 [2024-11-26 19:47:32.194397] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:06:41.356 19:47:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:41.356 19:47:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 62995 00:06:41.356 19:47:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 62995 ']' 00:06:41.356 19:47:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 62995 00:06:41.356 19:47:32 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:06:41.356 19:47:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:41.356 19:47:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62995 00:06:41.356 killing process with pid 62995 00:06:41.356 19:47:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:41.356 19:47:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:41.356 19:47:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62995' 00:06:41.356 19:47:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 62995 00:06:41.356 [2024-11-26 19:47:32.224394] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:41.356 19:47:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 62995 00:06:41.614 [2024-11-26 19:47:32.386818] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:42.180 19:47:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:06:42.180 00:06:42.180 real 0m7.574s 00:06:42.180 user 0m12.042s 00:06:42.180 sys 0m1.335s 00:06:42.180 ************************************ 00:06:42.180 END TEST raid_state_function_test_sb 00:06:42.180 ************************************ 00:06:42.180 19:47:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:42.180 19:47:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:42.180 19:47:33 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:06:42.180 19:47:33 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:06:42.180 19:47:33 bdev_raid -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:06:42.180 19:47:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:42.180 ************************************ 00:06:42.180 START TEST raid_superblock_test 00:06:42.180 ************************************ 00:06:42.180 19:47:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 3 00:06:42.181 19:47:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:06:42.181 19:47:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:06:42.181 19:47:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:06:42.181 19:47:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:06:42.181 19:47:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:06:42.181 19:47:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:06:42.181 19:47:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:06:42.181 19:47:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:06:42.181 19:47:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:06:42.181 19:47:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:06:42.181 19:47:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:06:42.181 19:47:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:06:42.181 19:47:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:06:42.181 19:47:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:06:42.181 19:47:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:06:42.181 19:47:33 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:06:42.181 19:47:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=63588 00:06:42.181 19:47:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:06:42.181 19:47:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 63588 00:06:42.181 19:47:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 63588 ']' 00:06:42.181 19:47:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:42.181 19:47:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:42.181 19:47:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:42.181 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:42.181 19:47:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:42.181 19:47:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:42.439 [2024-11-26 19:47:33.143767] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 
00:06:42.439 [2024-11-26 19:47:33.143898] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63588 ] 00:06:42.439 [2024-11-26 19:47:33.298608] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.697 [2024-11-26 19:47:33.404911] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.697 [2024-11-26 19:47:33.531512] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:42.697 [2024-11-26 19:47:33.531729] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:43.263 19:47:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:43.263 19:47:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:06:43.263 19:47:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:06:43.263 19:47:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:06:43.263 19:47:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:06:43.263 19:47:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:06:43.263 19:47:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:06:43.263 19:47:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:06:43.263 19:47:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:06:43.263 19:47:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:06:43.263 19:47:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:06:43.263 
19:47:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:43.263 19:47:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:43.263 malloc1 00:06:43.263 19:47:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:43.263 19:47:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:06:43.263 19:47:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:43.263 19:47:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:43.263 [2024-11-26 19:47:34.032138] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:06:43.263 [2024-11-26 19:47:34.032202] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:43.263 [2024-11-26 19:47:34.032222] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:06:43.263 [2024-11-26 19:47:34.032231] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:43.263 [2024-11-26 19:47:34.034234] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:43.263 [2024-11-26 19:47:34.034268] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:06:43.263 pt1 00:06:43.263 19:47:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:43.263 19:47:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:06:43.263 19:47:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:06:43.263 19:47:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:06:43.263 19:47:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:06:43.263 19:47:34 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:06:43.263 19:47:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:06:43.263 19:47:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:06:43.263 19:47:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:06:43.263 19:47:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:06:43.263 19:47:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:43.263 19:47:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:43.263 malloc2 00:06:43.263 19:47:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:43.263 19:47:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:06:43.263 19:47:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:43.263 19:47:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:43.263 [2024-11-26 19:47:34.066574] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:06:43.263 [2024-11-26 19:47:34.066757] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:43.263 [2024-11-26 19:47:34.066788] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:06:43.263 [2024-11-26 19:47:34.066796] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:43.263 [2024-11-26 19:47:34.068812] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:43.263 [2024-11-26 19:47:34.068845] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:06:43.263 
pt2 00:06:43.263 19:47:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:43.263 19:47:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:06:43.263 19:47:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:06:43.263 19:47:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:06:43.263 19:47:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:06:43.263 19:47:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:06:43.263 19:47:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:06:43.263 19:47:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:06:43.263 19:47:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:06:43.263 19:47:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:06:43.263 19:47:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:43.263 19:47:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:43.263 malloc3 00:06:43.263 19:47:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:43.263 19:47:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:06:43.263 19:47:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:43.263 19:47:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:43.263 [2024-11-26 19:47:34.114289] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:06:43.263 [2024-11-26 19:47:34.114359] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:43.263 [2024-11-26 19:47:34.114380] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:06:43.263 [2024-11-26 19:47:34.114389] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:43.263 [2024-11-26 19:47:34.116444] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:43.263 [2024-11-26 19:47:34.116476] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:06:43.263 pt3 00:06:43.263 19:47:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:43.263 19:47:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:06:43.263 19:47:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:06:43.263 19:47:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:06:43.263 19:47:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:43.263 19:47:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:43.263 [2024-11-26 19:47:34.122352] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:06:43.263 [2024-11-26 19:47:34.124231] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:06:43.263 [2024-11-26 19:47:34.124394] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:06:43.263 [2024-11-26 19:47:34.124645] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:06:43.263 [2024-11-26 19:47:34.124711] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:06:43.263 [2024-11-26 19:47:34.125013] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
00:06:43.263 [2024-11-26 19:47:34.125208] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:06:43.263 [2024-11-26 19:47:34.125267] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:06:43.263 [2024-11-26 19:47:34.125467] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:43.263 19:47:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:43.263 19:47:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:06:43.263 19:47:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:06:43.263 19:47:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:06:43.263 19:47:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:43.263 19:47:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:43.264 19:47:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:06:43.264 19:47:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:43.264 19:47:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:43.264 19:47:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:43.264 19:47:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:43.264 19:47:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:43.264 19:47:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:06:43.264 19:47:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:43.264 19:47:34 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:43.264 19:47:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:43.264 19:47:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:43.264 "name": "raid_bdev1", 00:06:43.264 "uuid": "04f15533-a0cf-4f4a-aed1-47070d42d54f", 00:06:43.264 "strip_size_kb": 64, 00:06:43.264 "state": "online", 00:06:43.264 "raid_level": "raid0", 00:06:43.264 "superblock": true, 00:06:43.264 "num_base_bdevs": 3, 00:06:43.264 "num_base_bdevs_discovered": 3, 00:06:43.264 "num_base_bdevs_operational": 3, 00:06:43.264 "base_bdevs_list": [ 00:06:43.264 { 00:06:43.264 "name": "pt1", 00:06:43.264 "uuid": "00000000-0000-0000-0000-000000000001", 00:06:43.264 "is_configured": true, 00:06:43.264 "data_offset": 2048, 00:06:43.264 "data_size": 63488 00:06:43.264 }, 00:06:43.264 { 00:06:43.264 "name": "pt2", 00:06:43.264 "uuid": "00000000-0000-0000-0000-000000000002", 00:06:43.264 "is_configured": true, 00:06:43.264 "data_offset": 2048, 00:06:43.264 "data_size": 63488 00:06:43.264 }, 00:06:43.264 { 00:06:43.264 "name": "pt3", 00:06:43.264 "uuid": "00000000-0000-0000-0000-000000000003", 00:06:43.264 "is_configured": true, 00:06:43.264 "data_offset": 2048, 00:06:43.264 "data_size": 63488 00:06:43.264 } 00:06:43.264 ] 00:06:43.264 }' 00:06:43.264 19:47:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:43.264 19:47:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:43.522 19:47:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:06:43.522 19:47:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:06:43.522 19:47:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:06:43.522 19:47:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:06:43.522 19:47:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:06:43.522 19:47:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:06:43.522 19:47:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:06:43.522 19:47:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:06:43.522 19:47:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:43.522 19:47:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:43.522 [2024-11-26 19:47:34.434683] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:43.522 19:47:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:43.780 19:47:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:06:43.780 "name": "raid_bdev1", 00:06:43.780 "aliases": [ 00:06:43.780 "04f15533-a0cf-4f4a-aed1-47070d42d54f" 00:06:43.780 ], 00:06:43.780 "product_name": "Raid Volume", 00:06:43.780 "block_size": 512, 00:06:43.780 "num_blocks": 190464, 00:06:43.780 "uuid": "04f15533-a0cf-4f4a-aed1-47070d42d54f", 00:06:43.780 "assigned_rate_limits": { 00:06:43.780 "rw_ios_per_sec": 0, 00:06:43.780 "rw_mbytes_per_sec": 0, 00:06:43.780 "r_mbytes_per_sec": 0, 00:06:43.780 "w_mbytes_per_sec": 0 00:06:43.780 }, 00:06:43.780 "claimed": false, 00:06:43.780 "zoned": false, 00:06:43.780 "supported_io_types": { 00:06:43.780 "read": true, 00:06:43.780 "write": true, 00:06:43.780 "unmap": true, 00:06:43.780 "flush": true, 00:06:43.780 "reset": true, 00:06:43.780 "nvme_admin": false, 00:06:43.780 "nvme_io": false, 00:06:43.780 "nvme_io_md": false, 00:06:43.780 "write_zeroes": true, 00:06:43.780 "zcopy": false, 00:06:43.780 "get_zone_info": false, 00:06:43.780 "zone_management": false, 00:06:43.780 "zone_append": false, 00:06:43.780 "compare": 
false, 00:06:43.780 "compare_and_write": false, 00:06:43.780 "abort": false, 00:06:43.780 "seek_hole": false, 00:06:43.780 "seek_data": false, 00:06:43.780 "copy": false, 00:06:43.780 "nvme_iov_md": false 00:06:43.780 }, 00:06:43.780 "memory_domains": [ 00:06:43.780 { 00:06:43.780 "dma_device_id": "system", 00:06:43.780 "dma_device_type": 1 00:06:43.780 }, 00:06:43.780 { 00:06:43.780 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:43.780 "dma_device_type": 2 00:06:43.780 }, 00:06:43.780 { 00:06:43.780 "dma_device_id": "system", 00:06:43.780 "dma_device_type": 1 00:06:43.780 }, 00:06:43.780 { 00:06:43.780 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:43.780 "dma_device_type": 2 00:06:43.780 }, 00:06:43.780 { 00:06:43.780 "dma_device_id": "system", 00:06:43.780 "dma_device_type": 1 00:06:43.780 }, 00:06:43.780 { 00:06:43.780 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:43.780 "dma_device_type": 2 00:06:43.780 } 00:06:43.780 ], 00:06:43.780 "driver_specific": { 00:06:43.780 "raid": { 00:06:43.780 "uuid": "04f15533-a0cf-4f4a-aed1-47070d42d54f", 00:06:43.780 "strip_size_kb": 64, 00:06:43.780 "state": "online", 00:06:43.780 "raid_level": "raid0", 00:06:43.780 "superblock": true, 00:06:43.780 "num_base_bdevs": 3, 00:06:43.780 "num_base_bdevs_discovered": 3, 00:06:43.780 "num_base_bdevs_operational": 3, 00:06:43.780 "base_bdevs_list": [ 00:06:43.780 { 00:06:43.780 "name": "pt1", 00:06:43.780 "uuid": "00000000-0000-0000-0000-000000000001", 00:06:43.780 "is_configured": true, 00:06:43.780 "data_offset": 2048, 00:06:43.780 "data_size": 63488 00:06:43.780 }, 00:06:43.780 { 00:06:43.780 "name": "pt2", 00:06:43.780 "uuid": "00000000-0000-0000-0000-000000000002", 00:06:43.780 "is_configured": true, 00:06:43.780 "data_offset": 2048, 00:06:43.780 "data_size": 63488 00:06:43.780 }, 00:06:43.780 { 00:06:43.780 "name": "pt3", 00:06:43.780 "uuid": "00000000-0000-0000-0000-000000000003", 00:06:43.780 "is_configured": true, 00:06:43.780 "data_offset": 2048, 00:06:43.780 "data_size": 
63488 00:06:43.780 } 00:06:43.780 ] 00:06:43.780 } 00:06:43.780 } 00:06:43.780 }' 00:06:43.780 19:47:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:06:43.780 19:47:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:06:43.780 pt2 00:06:43.780 pt3' 00:06:43.780 19:47:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:43.780 19:47:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:06:43.780 19:47:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:43.780 19:47:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:43.780 19:47:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:06:43.780 19:47:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:43.780 19:47:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:43.780 19:47:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:43.780 19:47:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:43.780 19:47:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:43.780 19:47:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:43.780 19:47:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:43.780 19:47:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:06:43.780 19:47:34 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:06:43.780 19:47:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:43.780 19:47:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:43.780 19:47:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:43.780 19:47:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:43.780 19:47:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:43.780 19:47:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:06:43.780 19:47:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:43.780 19:47:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:43.780 19:47:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:43.780 19:47:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:43.780 19:47:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:43.780 19:47:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:43.781 19:47:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:06:43.781 19:47:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:06:43.781 19:47:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:43.781 19:47:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:43.781 [2024-11-26 19:47:34.614685] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:43.781 19:47:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:06:43.781 19:47:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=04f15533-a0cf-4f4a-aed1-47070d42d54f 00:06:43.781 19:47:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 04f15533-a0cf-4f4a-aed1-47070d42d54f ']' 00:06:43.781 19:47:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:06:43.781 19:47:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:43.781 19:47:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:43.781 [2024-11-26 19:47:34.642404] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:06:43.781 [2024-11-26 19:47:34.642429] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:43.781 [2024-11-26 19:47:34.642504] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:43.781 [2024-11-26 19:47:34.642573] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:43.781 [2024-11-26 19:47:34.642582] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:06:43.781 19:47:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:43.781 19:47:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:43.781 19:47:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:43.781 19:47:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:43.781 19:47:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:06:43.781 19:47:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:43.781 19:47:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 
00:06:43.781 19:47:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:06:43.781 19:47:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:06:43.781 19:47:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:06:43.781 19:47:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:43.781 19:47:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:43.781 19:47:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:43.781 19:47:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:06:43.781 19:47:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:06:43.781 19:47:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:43.781 19:47:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:43.781 19:47:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:43.781 19:47:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:06:43.781 19:47:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:06:43.781 19:47:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:43.781 19:47:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:43.781 19:47:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:43.781 19:47:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:06:43.781 19:47:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:43.781 19:47:34 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:06:43.781 19:47:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:06:44.039 19:47:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:44.039 19:47:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:06:44.039 19:47:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:06:44.039 19:47:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:06:44.039 19:47:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:06:44.039 19:47:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:44.039 19:47:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:44.039 19:47:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:44.039 19:47:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:44.039 19:47:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:06:44.039 19:47:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:44.039 19:47:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:44.039 [2024-11-26 19:47:34.750646] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:06:44.039 [2024-11-26 19:47:34.752854] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:06:44.039 [2024-11-26 19:47:34.752905] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:06:44.039 [2024-11-26 19:47:34.752956] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:06:44.039 [2024-11-26 19:47:34.753022] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:06:44.039 [2024-11-26 19:47:34.753038] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:06:44.039 [2024-11-26 19:47:34.753053] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:06:44.039 [2024-11-26 19:47:34.753064] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:06:44.039 request: 00:06:44.039 { 00:06:44.039 "name": "raid_bdev1", 00:06:44.039 "raid_level": "raid0", 00:06:44.039 "base_bdevs": [ 00:06:44.039 "malloc1", 00:06:44.039 "malloc2", 00:06:44.039 "malloc3" 00:06:44.039 ], 00:06:44.039 "strip_size_kb": 64, 00:06:44.039 "superblock": false, 00:06:44.039 "method": "bdev_raid_create", 00:06:44.039 "req_id": 1 00:06:44.039 } 00:06:44.039 Got JSON-RPC error response 00:06:44.039 response: 00:06:44.039 { 00:06:44.039 "code": -17, 00:06:44.039 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:06:44.039 } 00:06:44.039 19:47:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:44.039 19:47:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:06:44.039 19:47:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:44.039 19:47:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:44.039 19:47:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:44.039 19:47:34 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:44.039 19:47:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:06:44.039 19:47:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:44.039 19:47:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:44.039 19:47:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:44.039 19:47:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:06:44.039 19:47:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:06:44.039 19:47:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:06:44.039 19:47:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:44.039 19:47:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:44.039 [2024-11-26 19:47:34.802436] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:06:44.039 [2024-11-26 19:47:34.802496] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:44.039 [2024-11-26 19:47:34.802516] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:06:44.039 [2024-11-26 19:47:34.802525] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:44.039 [2024-11-26 19:47:34.805102] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:44.039 [2024-11-26 19:47:34.805137] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:06:44.039 [2024-11-26 19:47:34.805365] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:06:44.039 [2024-11-26 19:47:34.805413] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 
00:06:44.039 pt1 00:06:44.039 19:47:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:44.039 19:47:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:06:44.039 19:47:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:06:44.039 19:47:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:44.039 19:47:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:44.039 19:47:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:44.039 19:47:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:06:44.039 19:47:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:44.039 19:47:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:44.039 19:47:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:44.039 19:47:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:44.039 19:47:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:06:44.039 19:47:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:44.039 19:47:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:44.039 19:47:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:44.039 19:47:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:44.039 19:47:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:44.039 "name": "raid_bdev1", 00:06:44.039 "uuid": "04f15533-a0cf-4f4a-aed1-47070d42d54f", 00:06:44.039 
"strip_size_kb": 64, 00:06:44.039 "state": "configuring", 00:06:44.039 "raid_level": "raid0", 00:06:44.039 "superblock": true, 00:06:44.039 "num_base_bdevs": 3, 00:06:44.039 "num_base_bdevs_discovered": 1, 00:06:44.039 "num_base_bdevs_operational": 3, 00:06:44.039 "base_bdevs_list": [ 00:06:44.039 { 00:06:44.039 "name": "pt1", 00:06:44.039 "uuid": "00000000-0000-0000-0000-000000000001", 00:06:44.039 "is_configured": true, 00:06:44.039 "data_offset": 2048, 00:06:44.039 "data_size": 63488 00:06:44.039 }, 00:06:44.039 { 00:06:44.039 "name": null, 00:06:44.039 "uuid": "00000000-0000-0000-0000-000000000002", 00:06:44.039 "is_configured": false, 00:06:44.039 "data_offset": 2048, 00:06:44.039 "data_size": 63488 00:06:44.039 }, 00:06:44.039 { 00:06:44.039 "name": null, 00:06:44.039 "uuid": "00000000-0000-0000-0000-000000000003", 00:06:44.039 "is_configured": false, 00:06:44.039 "data_offset": 2048, 00:06:44.039 "data_size": 63488 00:06:44.039 } 00:06:44.039 ] 00:06:44.039 }' 00:06:44.039 19:47:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:44.039 19:47:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:44.354 19:47:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:06:44.354 19:47:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:06:44.354 19:47:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:44.354 19:47:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:44.354 [2024-11-26 19:47:35.094525] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:06:44.354 [2024-11-26 19:47:35.094601] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:44.354 [2024-11-26 19:47:35.094624] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created 
at: 0x0x616000009c80 00:06:44.354 [2024-11-26 19:47:35.094634] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:44.354 [2024-11-26 19:47:35.095123] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:44.354 [2024-11-26 19:47:35.095153] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:06:44.354 [2024-11-26 19:47:35.095241] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:06:44.354 [2024-11-26 19:47:35.095267] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:06:44.354 pt2 00:06:44.354 19:47:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:44.354 19:47:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:06:44.354 19:47:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:44.354 19:47:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:44.354 [2024-11-26 19:47:35.102526] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:06:44.354 19:47:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:44.354 19:47:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:06:44.354 19:47:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:06:44.354 19:47:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:44.354 19:47:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:44.354 19:47:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:44.354 19:47:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:06:44.354 19:47:35 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:44.354 19:47:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:44.354 19:47:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:44.354 19:47:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:44.354 19:47:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:06:44.354 19:47:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:44.354 19:47:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:44.354 19:47:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:44.354 19:47:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:44.354 19:47:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:44.354 "name": "raid_bdev1", 00:06:44.354 "uuid": "04f15533-a0cf-4f4a-aed1-47070d42d54f", 00:06:44.354 "strip_size_kb": 64, 00:06:44.354 "state": "configuring", 00:06:44.354 "raid_level": "raid0", 00:06:44.354 "superblock": true, 00:06:44.354 "num_base_bdevs": 3, 00:06:44.354 "num_base_bdevs_discovered": 1, 00:06:44.354 "num_base_bdevs_operational": 3, 00:06:44.354 "base_bdevs_list": [ 00:06:44.354 { 00:06:44.354 "name": "pt1", 00:06:44.354 "uuid": "00000000-0000-0000-0000-000000000001", 00:06:44.354 "is_configured": true, 00:06:44.354 "data_offset": 2048, 00:06:44.354 "data_size": 63488 00:06:44.354 }, 00:06:44.354 { 00:06:44.354 "name": null, 00:06:44.354 "uuid": "00000000-0000-0000-0000-000000000002", 00:06:44.354 "is_configured": false, 00:06:44.354 "data_offset": 0, 00:06:44.354 "data_size": 63488 00:06:44.354 }, 00:06:44.354 { 00:06:44.354 "name": null, 00:06:44.354 "uuid": "00000000-0000-0000-0000-000000000003", 00:06:44.354 
"is_configured": false, 00:06:44.354 "data_offset": 2048, 00:06:44.354 "data_size": 63488 00:06:44.354 } 00:06:44.354 ] 00:06:44.354 }' 00:06:44.354 19:47:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:44.354 19:47:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:44.616 19:47:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:06:44.616 19:47:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:06:44.616 19:47:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:06:44.616 19:47:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:44.616 19:47:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:44.616 [2024-11-26 19:47:35.426560] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:06:44.616 [2024-11-26 19:47:35.426639] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:44.616 [2024-11-26 19:47:35.426658] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:06:44.616 [2024-11-26 19:47:35.426669] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:44.616 [2024-11-26 19:47:35.427179] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:44.616 [2024-11-26 19:47:35.427216] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:06:44.616 [2024-11-26 19:47:35.427295] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:06:44.616 [2024-11-26 19:47:35.427317] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:06:44.616 pt2 00:06:44.616 19:47:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:06:44.616 19:47:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:06:44.616 19:47:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:06:44.616 19:47:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:06:44.616 19:47:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:44.616 19:47:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:44.616 [2024-11-26 19:47:35.434542] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:06:44.616 [2024-11-26 19:47:35.434586] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:44.616 [2024-11-26 19:47:35.434599] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:06:44.616 [2024-11-26 19:47:35.434608] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:44.616 [2024-11-26 19:47:35.434989] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:44.616 [2024-11-26 19:47:35.435016] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:06:44.616 [2024-11-26 19:47:35.435100] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:06:44.616 [2024-11-26 19:47:35.435125] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:06:44.616 [2024-11-26 19:47:35.435257] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:06:44.616 [2024-11-26 19:47:35.435274] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:06:44.616 [2024-11-26 19:47:35.435508] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:06:44.616 [2024-11-26 19:47:35.435622] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:06:44.616 [2024-11-26 19:47:35.435707] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:06:44.616 [2024-11-26 19:47:35.435825] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:44.616 pt3 00:06:44.616 19:47:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:44.616 19:47:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:06:44.616 19:47:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:06:44.616 19:47:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:06:44.616 19:47:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:06:44.616 19:47:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:06:44.616 19:47:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:44.616 19:47:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:44.616 19:47:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:06:44.616 19:47:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:44.616 19:47:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:44.616 19:47:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:44.616 19:47:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:44.616 19:47:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:44.616 19:47:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:06:44.616 19:47:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:44.616 19:47:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:44.616 19:47:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:44.616 19:47:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:44.616 "name": "raid_bdev1", 00:06:44.616 "uuid": "04f15533-a0cf-4f4a-aed1-47070d42d54f", 00:06:44.616 "strip_size_kb": 64, 00:06:44.616 "state": "online", 00:06:44.616 "raid_level": "raid0", 00:06:44.616 "superblock": true, 00:06:44.616 "num_base_bdevs": 3, 00:06:44.616 "num_base_bdevs_discovered": 3, 00:06:44.616 "num_base_bdevs_operational": 3, 00:06:44.616 "base_bdevs_list": [ 00:06:44.616 { 00:06:44.616 "name": "pt1", 00:06:44.616 "uuid": "00000000-0000-0000-0000-000000000001", 00:06:44.616 "is_configured": true, 00:06:44.616 "data_offset": 2048, 00:06:44.616 "data_size": 63488 00:06:44.616 }, 00:06:44.616 { 00:06:44.616 "name": "pt2", 00:06:44.616 "uuid": "00000000-0000-0000-0000-000000000002", 00:06:44.616 "is_configured": true, 00:06:44.616 "data_offset": 2048, 00:06:44.616 "data_size": 63488 00:06:44.616 }, 00:06:44.616 { 00:06:44.616 "name": "pt3", 00:06:44.616 "uuid": "00000000-0000-0000-0000-000000000003", 00:06:44.616 "is_configured": true, 00:06:44.616 "data_offset": 2048, 00:06:44.616 "data_size": 63488 00:06:44.616 } 00:06:44.616 ] 00:06:44.617 }' 00:06:44.617 19:47:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:44.617 19:47:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:44.874 19:47:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:06:44.874 19:47:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:06:44.874 19:47:35 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:06:44.874 19:47:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:06:44.874 19:47:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:06:44.874 19:47:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:06:44.874 19:47:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:06:44.874 19:47:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:06:44.874 19:47:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:44.874 19:47:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:44.874 [2024-11-26 19:47:35.743111] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:44.874 19:47:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:44.874 19:47:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:06:44.874 "name": "raid_bdev1", 00:06:44.874 "aliases": [ 00:06:44.874 "04f15533-a0cf-4f4a-aed1-47070d42d54f" 00:06:44.874 ], 00:06:44.874 "product_name": "Raid Volume", 00:06:44.874 "block_size": 512, 00:06:44.874 "num_blocks": 190464, 00:06:44.874 "uuid": "04f15533-a0cf-4f4a-aed1-47070d42d54f", 00:06:44.874 "assigned_rate_limits": { 00:06:44.874 "rw_ios_per_sec": 0, 00:06:44.874 "rw_mbytes_per_sec": 0, 00:06:44.874 "r_mbytes_per_sec": 0, 00:06:44.874 "w_mbytes_per_sec": 0 00:06:44.874 }, 00:06:44.874 "claimed": false, 00:06:44.874 "zoned": false, 00:06:44.874 "supported_io_types": { 00:06:44.874 "read": true, 00:06:44.874 "write": true, 00:06:44.874 "unmap": true, 00:06:44.874 "flush": true, 00:06:44.874 "reset": true, 00:06:44.874 "nvme_admin": false, 00:06:44.874 "nvme_io": false, 00:06:44.874 "nvme_io_md": false, 00:06:44.874 
"write_zeroes": true, 00:06:44.874 "zcopy": false, 00:06:44.874 "get_zone_info": false, 00:06:44.874 "zone_management": false, 00:06:44.874 "zone_append": false, 00:06:44.874 "compare": false, 00:06:44.874 "compare_and_write": false, 00:06:44.874 "abort": false, 00:06:44.874 "seek_hole": false, 00:06:44.874 "seek_data": false, 00:06:44.874 "copy": false, 00:06:44.874 "nvme_iov_md": false 00:06:44.874 }, 00:06:44.874 "memory_domains": [ 00:06:44.874 { 00:06:44.874 "dma_device_id": "system", 00:06:44.874 "dma_device_type": 1 00:06:44.874 }, 00:06:44.874 { 00:06:44.874 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:44.874 "dma_device_type": 2 00:06:44.874 }, 00:06:44.874 { 00:06:44.874 "dma_device_id": "system", 00:06:44.874 "dma_device_type": 1 00:06:44.874 }, 00:06:44.874 { 00:06:44.874 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:44.874 "dma_device_type": 2 00:06:44.874 }, 00:06:44.874 { 00:06:44.874 "dma_device_id": "system", 00:06:44.874 "dma_device_type": 1 00:06:44.874 }, 00:06:44.874 { 00:06:44.874 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:44.874 "dma_device_type": 2 00:06:44.874 } 00:06:44.874 ], 00:06:44.874 "driver_specific": { 00:06:44.874 "raid": { 00:06:44.874 "uuid": "04f15533-a0cf-4f4a-aed1-47070d42d54f", 00:06:44.874 "strip_size_kb": 64, 00:06:44.874 "state": "online", 00:06:44.874 "raid_level": "raid0", 00:06:44.874 "superblock": true, 00:06:44.874 "num_base_bdevs": 3, 00:06:44.874 "num_base_bdevs_discovered": 3, 00:06:44.874 "num_base_bdevs_operational": 3, 00:06:44.874 "base_bdevs_list": [ 00:06:44.874 { 00:06:44.874 "name": "pt1", 00:06:44.874 "uuid": "00000000-0000-0000-0000-000000000001", 00:06:44.875 "is_configured": true, 00:06:44.875 "data_offset": 2048, 00:06:44.875 "data_size": 63488 00:06:44.875 }, 00:06:44.875 { 00:06:44.875 "name": "pt2", 00:06:44.875 "uuid": "00000000-0000-0000-0000-000000000002", 00:06:44.875 "is_configured": true, 00:06:44.875 "data_offset": 2048, 00:06:44.875 "data_size": 63488 00:06:44.875 }, 00:06:44.875 
{ 00:06:44.875 "name": "pt3", 00:06:44.875 "uuid": "00000000-0000-0000-0000-000000000003", 00:06:44.875 "is_configured": true, 00:06:44.875 "data_offset": 2048, 00:06:44.875 "data_size": 63488 00:06:44.875 } 00:06:44.875 ] 00:06:44.875 } 00:06:44.875 } 00:06:44.875 }' 00:06:44.875 19:47:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:06:44.875 19:47:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:06:44.875 pt2 00:06:44.875 pt3' 00:06:44.875 19:47:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:45.132 19:47:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:06:45.132 19:47:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:45.132 19:47:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:06:45.132 19:47:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:45.132 19:47:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:45.132 19:47:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:45.132 19:47:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:45.132 19:47:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:45.132 19:47:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:45.132 19:47:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:45.132 19:47:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:06:45.132 19:47:35 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:45.132 19:47:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:45.132 19:47:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:45.132 19:47:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:45.132 19:47:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:45.132 19:47:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:45.132 19:47:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:45.132 19:47:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:06:45.132 19:47:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:45.132 19:47:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:45.133 19:47:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:45.133 19:47:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:45.133 19:47:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:45.133 19:47:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:45.133 19:47:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:06:45.133 19:47:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:06:45.133 19:47:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:45.133 19:47:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:45.133 
[2024-11-26 19:47:35.942905] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:45.133 19:47:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:45.133 19:47:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 04f15533-a0cf-4f4a-aed1-47070d42d54f '!=' 04f15533-a0cf-4f4a-aed1-47070d42d54f ']' 00:06:45.133 19:47:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:06:45.133 19:47:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:06:45.133 19:47:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:06:45.133 19:47:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 63588 00:06:45.133 19:47:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 63588 ']' 00:06:45.133 19:47:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 63588 00:06:45.133 19:47:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:06:45.133 19:47:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:45.133 19:47:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63588 00:06:45.133 killing process with pid 63588 00:06:45.133 19:47:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:45.133 19:47:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:45.133 19:47:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63588' 00:06:45.133 19:47:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 63588 00:06:45.133 [2024-11-26 19:47:35.994252] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:45.133 19:47:35 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@978 -- # wait 63588 00:06:45.133 [2024-11-26 19:47:35.994373] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:45.133 [2024-11-26 19:47:35.994437] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:45.133 [2024-11-26 19:47:35.994448] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:06:45.390 [2024-11-26 19:47:36.155954] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:45.954 ************************************ 00:06:45.954 END TEST raid_superblock_test 00:06:45.954 ************************************ 00:06:45.954 19:47:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:06:45.954 00:06:45.954 real 0m3.721s 00:06:45.954 user 0m5.310s 00:06:45.954 sys 0m0.668s 00:06:45.954 19:47:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:45.954 19:47:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:45.954 19:47:36 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 3 read 00:06:45.954 19:47:36 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:06:45.954 19:47:36 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:45.954 19:47:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:45.954 ************************************ 00:06:45.954 START TEST raid_read_error_test 00:06:45.954 ************************************ 00:06:45.954 19:47:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 3 read 00:06:45.954 19:47:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:06:45.954 19:47:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:06:45.954 19:47:36 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:06:45.954 19:47:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:06:45.954 19:47:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:06:45.954 19:47:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:06:45.954 19:47:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:06:45.954 19:47:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:06:45.954 19:47:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:06:45.954 19:47:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:06:45.954 19:47:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:06:45.954 19:47:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:06:45.954 19:47:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:06:45.954 19:47:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:06:45.954 19:47:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:06:45.954 19:47:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:06:45.954 19:47:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:06:45.954 19:47:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:06:45.954 19:47:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:06:45.955 19:47:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:06:45.955 19:47:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:06:45.955 19:47:36 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:06:45.955 19:47:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:06:45.955 19:47:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:06:45.955 19:47:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:06:45.955 19:47:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.JW0PZ0mz6e 00:06:45.955 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:45.955 19:47:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63830 00:06:45.955 19:47:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 63830 00:06:45.955 19:47:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 63830 ']' 00:06:45.955 19:47:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:45.955 19:47:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:45.955 19:47:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:45.955 19:47:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:45.955 19:47:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:06:45.955 19:47:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:46.212 [2024-11-26 19:47:36.913844] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 
00:06:46.212 [2024-11-26 19:47:36.913984] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63830 ] 00:06:46.212 [2024-11-26 19:47:37.071674] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.469 [2024-11-26 19:47:37.175418] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.469 [2024-11-26 19:47:37.299315] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:46.469 [2024-11-26 19:47:37.299376] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:47.033 19:47:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:47.033 19:47:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:06:47.033 19:47:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:06:47.033 19:47:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:06:47.033 19:47:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:47.033 19:47:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:47.033 BaseBdev1_malloc 00:06:47.033 19:47:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:47.033 19:47:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:06:47.033 19:47:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:47.033 19:47:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:47.033 true 00:06:47.033 19:47:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:06:47.033 19:47:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:06:47.033 19:47:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:47.033 19:47:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:47.033 [2024-11-26 19:47:37.789256] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:06:47.033 [2024-11-26 19:47:37.789313] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:47.033 [2024-11-26 19:47:37.789331] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:06:47.033 [2024-11-26 19:47:37.789349] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:47.033 [2024-11-26 19:47:37.791415] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:47.033 [2024-11-26 19:47:37.791448] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:06:47.033 BaseBdev1 00:06:47.033 19:47:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:47.033 19:47:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:06:47.033 19:47:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:06:47.033 19:47:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:47.033 19:47:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:47.033 BaseBdev2_malloc 00:06:47.034 19:47:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:47.034 19:47:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:06:47.034 19:47:37 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:06:47.034 19:47:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:47.034 true 00:06:47.034 19:47:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:47.034 19:47:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:06:47.034 19:47:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:47.034 19:47:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:47.034 [2024-11-26 19:47:37.835894] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:06:47.034 [2024-11-26 19:47:37.835946] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:47.034 [2024-11-26 19:47:37.835961] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:06:47.034 [2024-11-26 19:47:37.835971] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:47.034 [2024-11-26 19:47:37.837873] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:47.034 [2024-11-26 19:47:37.837908] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:06:47.034 BaseBdev2 00:06:47.034 19:47:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:47.034 19:47:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:06:47.034 19:47:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:06:47.034 19:47:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:47.034 19:47:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:47.034 BaseBdev3_malloc 00:06:47.034 19:47:37 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:47.034 19:47:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:06:47.034 19:47:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:47.034 19:47:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:47.034 true 00:06:47.034 19:47:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:47.034 19:47:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:06:47.034 19:47:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:47.034 19:47:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:47.034 [2024-11-26 19:47:37.898749] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:06:47.034 [2024-11-26 19:47:37.898950] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:47.034 [2024-11-26 19:47:37.898980] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:06:47.034 [2024-11-26 19:47:37.898993] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:47.034 [2024-11-26 19:47:37.900967] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:47.034 [2024-11-26 19:47:37.900998] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:06:47.034 BaseBdev3 00:06:47.034 19:47:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:47.034 19:47:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:06:47.034 19:47:37 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:06:47.034 19:47:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:47.034 [2024-11-26 19:47:37.906818] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:06:47.034 [2024-11-26 19:47:37.908572] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:06:47.034 [2024-11-26 19:47:37.908636] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:06:47.034 [2024-11-26 19:47:37.908813] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:06:47.034 [2024-11-26 19:47:37.908824] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:06:47.034 [2024-11-26 19:47:37.909060] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:06:47.034 [2024-11-26 19:47:37.909185] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:06:47.034 [2024-11-26 19:47:37.909196] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:06:47.034 [2024-11-26 19:47:37.909319] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:47.034 19:47:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:47.034 19:47:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:06:47.034 19:47:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:06:47.034 19:47:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:06:47.034 19:47:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:47.034 19:47:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:47.034 19:47:37 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:06:47.034 19:47:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:47.034 19:47:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:47.034 19:47:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:47.034 19:47:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:47.034 19:47:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:47.034 19:47:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:47.034 19:47:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:47.034 19:47:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:06:47.034 19:47:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:47.034 19:47:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:47.034 "name": "raid_bdev1", 00:06:47.034 "uuid": "4351fbde-622c-4838-b8fb-a2d0de40f8af", 00:06:47.034 "strip_size_kb": 64, 00:06:47.034 "state": "online", 00:06:47.034 "raid_level": "raid0", 00:06:47.034 "superblock": true, 00:06:47.034 "num_base_bdevs": 3, 00:06:47.034 "num_base_bdevs_discovered": 3, 00:06:47.034 "num_base_bdevs_operational": 3, 00:06:47.034 "base_bdevs_list": [ 00:06:47.034 { 00:06:47.034 "name": "BaseBdev1", 00:06:47.034 "uuid": "077efa71-d551-5bcf-9161-a52575483006", 00:06:47.034 "is_configured": true, 00:06:47.034 "data_offset": 2048, 00:06:47.034 "data_size": 63488 00:06:47.034 }, 00:06:47.034 { 00:06:47.034 "name": "BaseBdev2", 00:06:47.034 "uuid": "5e6f1d77-b7b8-5ad8-b141-867078fcb62e", 00:06:47.034 "is_configured": true, 00:06:47.034 "data_offset": 2048, 00:06:47.034 "data_size": 63488 
00:06:47.034 }, 00:06:47.034 { 00:06:47.034 "name": "BaseBdev3", 00:06:47.034 "uuid": "4228b9e1-db04-5fb9-ab3a-72de26ee5d21", 00:06:47.034 "is_configured": true, 00:06:47.034 "data_offset": 2048, 00:06:47.034 "data_size": 63488 00:06:47.034 } 00:06:47.034 ] 00:06:47.034 }' 00:06:47.034 19:47:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:47.034 19:47:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:47.598 19:47:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:06:47.598 19:47:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:06:47.598 [2024-11-26 19:47:38.335932] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:06:48.531 19:47:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:06:48.531 19:47:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:48.531 19:47:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:48.531 19:47:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:48.531 19:47:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:06:48.531 19:47:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:06:48.531 19:47:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:06:48.531 19:47:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:06:48.531 19:47:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:06:48.531 19:47:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:06:48.531 19:47:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:48.531 19:47:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:48.531 19:47:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:06:48.531 19:47:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:48.531 19:47:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:48.531 19:47:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:48.531 19:47:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:48.531 19:47:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:06:48.531 19:47:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:48.531 19:47:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:48.531 19:47:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:48.531 19:47:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:48.531 19:47:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:48.531 "name": "raid_bdev1", 00:06:48.531 "uuid": "4351fbde-622c-4838-b8fb-a2d0de40f8af", 00:06:48.531 "strip_size_kb": 64, 00:06:48.531 "state": "online", 00:06:48.531 "raid_level": "raid0", 00:06:48.531 "superblock": true, 00:06:48.531 "num_base_bdevs": 3, 00:06:48.531 "num_base_bdevs_discovered": 3, 00:06:48.531 "num_base_bdevs_operational": 3, 00:06:48.531 "base_bdevs_list": [ 00:06:48.531 { 00:06:48.531 "name": "BaseBdev1", 00:06:48.531 "uuid": "077efa71-d551-5bcf-9161-a52575483006", 00:06:48.531 "is_configured": true, 00:06:48.531 "data_offset": 2048, 00:06:48.531 "data_size": 63488 
00:06:48.531 }, 00:06:48.531 { 00:06:48.531 "name": "BaseBdev2", 00:06:48.531 "uuid": "5e6f1d77-b7b8-5ad8-b141-867078fcb62e", 00:06:48.531 "is_configured": true, 00:06:48.531 "data_offset": 2048, 00:06:48.531 "data_size": 63488 00:06:48.531 }, 00:06:48.531 { 00:06:48.531 "name": "BaseBdev3", 00:06:48.531 "uuid": "4228b9e1-db04-5fb9-ab3a-72de26ee5d21", 00:06:48.531 "is_configured": true, 00:06:48.531 "data_offset": 2048, 00:06:48.531 "data_size": 63488 00:06:48.531 } 00:06:48.531 ] 00:06:48.531 }' 00:06:48.531 19:47:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:48.531 19:47:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:48.788 19:47:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:06:48.788 19:47:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:48.788 19:47:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:48.788 [2024-11-26 19:47:39.573018] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:06:48.788 [2024-11-26 19:47:39.573055] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:48.788 [2024-11-26 19:47:39.575644] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:48.788 [2024-11-26 19:47:39.575696] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:48.788 [2024-11-26 19:47:39.575730] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:48.788 [2024-11-26 19:47:39.575739] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:06:48.788 { 00:06:48.788 "results": [ 00:06:48.788 { 00:06:48.788 "job": "raid_bdev1", 00:06:48.788 "core_mask": "0x1", 00:06:48.788 "workload": "randrw", 00:06:48.788 "percentage": 50, 
00:06:48.788 "status": "finished", 00:06:48.788 "queue_depth": 1, 00:06:48.788 "io_size": 131072, 00:06:48.788 "runtime": 1.235415, 00:06:48.788 "iops": 16229.364221739253, 00:06:48.788 "mibps": 2028.6705277174067, 00:06:48.788 "io_failed": 1, 00:06:48.788 "io_timeout": 0, 00:06:48.788 "avg_latency_us": 85.02269244196529, 00:06:48.788 "min_latency_us": 19.889230769230767, 00:06:48.788 "max_latency_us": 1436.7507692307693 00:06:48.788 } 00:06:48.788 ], 00:06:48.788 "core_count": 1 00:06:48.788 } 00:06:48.788 19:47:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:48.788 19:47:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63830 00:06:48.788 19:47:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 63830 ']' 00:06:48.788 19:47:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 63830 00:06:48.788 19:47:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:06:48.788 19:47:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:48.788 19:47:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63830 00:06:48.788 19:47:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:48.788 killing process with pid 63830 00:06:48.788 19:47:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:48.788 19:47:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63830' 00:06:48.788 19:47:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 63830 00:06:48.788 [2024-11-26 19:47:39.605787] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:48.788 19:47:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 63830 00:06:49.046 [2024-11-26 
19:47:39.731988] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:49.612 19:47:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.JW0PZ0mz6e 00:06:49.612 19:47:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:06:49.612 19:47:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:06:49.612 19:47:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.81 00:06:49.612 19:47:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:06:49.612 19:47:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:06:49.612 19:47:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:06:49.612 19:47:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.81 != \0\.\0\0 ]] 00:06:49.612 00:06:49.612 real 0m3.585s 00:06:49.612 user 0m4.221s 00:06:49.612 sys 0m0.448s 00:06:49.612 19:47:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:49.612 ************************************ 00:06:49.612 END TEST raid_read_error_test 00:06:49.612 ************************************ 00:06:49.612 19:47:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:49.612 19:47:40 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 3 write 00:06:49.612 19:47:40 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:06:49.612 19:47:40 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:49.612 19:47:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:49.612 ************************************ 00:06:49.612 START TEST raid_write_error_test 00:06:49.612 ************************************ 00:06:49.612 19:47:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 3 write 00:06:49.612 19:47:40 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:06:49.612 19:47:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:06:49.612 19:47:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:06:49.612 19:47:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:06:49.612 19:47:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:06:49.612 19:47:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:06:49.612 19:47:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:06:49.612 19:47:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:06:49.612 19:47:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:06:49.612 19:47:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:06:49.612 19:47:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:06:49.612 19:47:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:06:49.612 19:47:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:06:49.612 19:47:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:06:49.612 19:47:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:06:49.612 19:47:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:06:49.612 19:47:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:06:49.612 19:47:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:06:49.612 19:47:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:06:49.612 19:47:40 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:06:49.612 19:47:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:06:49.612 19:47:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:06:49.612 19:47:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:06:49.612 19:47:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:06:49.612 19:47:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:06:49.612 19:47:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.dZDDalRysr 00:06:49.612 19:47:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63959 00:06:49.612 19:47:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 63959 00:06:49.612 19:47:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 63959 ']' 00:06:49.612 19:47:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:49.612 19:47:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:06:49.612 19:47:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:49.612 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:49.612 19:47:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:49.612 19:47:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:49.612 19:47:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:49.871 [2024-11-26 19:47:40.561317] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 00:06:49.871 [2024-11-26 19:47:40.561471] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63959 ] 00:06:49.871 [2024-11-26 19:47:40.718641] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.128 [2024-11-26 19:47:40.826451] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.128 [2024-11-26 19:47:40.953620] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:50.128 [2024-11-26 19:47:40.953688] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:50.696 19:47:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:50.696 19:47:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:06:50.696 19:47:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:06:50.696 19:47:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:06:50.696 19:47:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:50.696 19:47:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:50.696 BaseBdev1_malloc 00:06:50.696 19:47:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:50.696 19:47:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:06:50.696 19:47:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:50.696 19:47:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:50.696 true 00:06:50.696 19:47:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:50.696 19:47:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:06:50.696 19:47:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:50.696 19:47:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:50.696 [2024-11-26 19:47:41.457706] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:06:50.696 [2024-11-26 19:47:41.457764] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:50.696 [2024-11-26 19:47:41.457781] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:06:50.696 [2024-11-26 19:47:41.457791] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:50.696 [2024-11-26 19:47:41.459788] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:50.696 [2024-11-26 19:47:41.459823] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:06:50.696 BaseBdev1 00:06:50.696 19:47:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:50.696 19:47:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:06:50.696 19:47:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:06:50.696 19:47:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:50.696 19:47:41 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:06:50.696 BaseBdev2_malloc 00:06:50.696 19:47:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:50.696 19:47:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:06:50.696 19:47:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:50.696 19:47:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:50.696 true 00:06:50.696 19:47:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:50.696 19:47:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:06:50.696 19:47:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:50.696 19:47:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:50.696 [2024-11-26 19:47:41.500979] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:06:50.696 [2024-11-26 19:47:41.501039] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:50.696 [2024-11-26 19:47:41.501055] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:06:50.696 [2024-11-26 19:47:41.501065] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:50.696 [2024-11-26 19:47:41.503050] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:50.696 [2024-11-26 19:47:41.503091] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:06:50.696 BaseBdev2 00:06:50.696 19:47:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:50.696 19:47:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:06:50.696 19:47:41 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:06:50.696 19:47:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:50.696 19:47:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:50.696 BaseBdev3_malloc 00:06:50.696 19:47:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:50.696 19:47:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:06:50.696 19:47:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:50.696 19:47:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:50.696 true 00:06:50.696 19:47:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:50.696 19:47:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:06:50.696 19:47:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:50.696 19:47:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:50.696 [2024-11-26 19:47:41.563218] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:06:50.696 [2024-11-26 19:47:41.563285] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:50.696 [2024-11-26 19:47:41.563306] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:06:50.696 [2024-11-26 19:47:41.563318] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:50.696 [2024-11-26 19:47:41.565626] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:50.696 [2024-11-26 19:47:41.565665] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:06:50.696 BaseBdev3 00:06:50.696 19:47:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:50.696 19:47:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:06:50.696 19:47:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:50.696 19:47:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:50.696 [2024-11-26 19:47:41.571305] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:06:50.696 [2024-11-26 19:47:41.573297] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:06:50.696 [2024-11-26 19:47:41.573388] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:06:50.696 [2024-11-26 19:47:41.573590] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:06:50.696 [2024-11-26 19:47:41.573603] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:06:50.696 [2024-11-26 19:47:41.573862] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:06:50.696 [2024-11-26 19:47:41.574008] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:06:50.696 [2024-11-26 19:47:41.574022] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:06:50.696 [2024-11-26 19:47:41.574169] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:50.696 19:47:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:50.696 19:47:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:06:50.696 19:47:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=raid_bdev1 00:06:50.696 19:47:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:06:50.697 19:47:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:50.697 19:47:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:50.697 19:47:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:06:50.697 19:47:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:50.697 19:47:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:50.697 19:47:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:50.697 19:47:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:50.697 19:47:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:50.697 19:47:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:06:50.697 19:47:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:50.697 19:47:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:50.697 19:47:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:50.697 19:47:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:50.697 "name": "raid_bdev1", 00:06:50.697 "uuid": "f395f1b4-9821-48f5-ac66-821e7eb19af0", 00:06:50.697 "strip_size_kb": 64, 00:06:50.697 "state": "online", 00:06:50.697 "raid_level": "raid0", 00:06:50.697 "superblock": true, 00:06:50.697 "num_base_bdevs": 3, 00:06:50.697 "num_base_bdevs_discovered": 3, 00:06:50.697 "num_base_bdevs_operational": 3, 00:06:50.697 "base_bdevs_list": [ 00:06:50.697 { 00:06:50.697 "name": "BaseBdev1", 
00:06:50.697 "uuid": "f2fd4275-bf99-5710-a3ae-8d5d8d7b4d90", 00:06:50.697 "is_configured": true, 00:06:50.697 "data_offset": 2048, 00:06:50.697 "data_size": 63488 00:06:50.697 }, 00:06:50.697 { 00:06:50.697 "name": "BaseBdev2", 00:06:50.697 "uuid": "e0f9e7ae-1ede-5772-bfcd-23c30ac78a34", 00:06:50.697 "is_configured": true, 00:06:50.697 "data_offset": 2048, 00:06:50.697 "data_size": 63488 00:06:50.697 }, 00:06:50.697 { 00:06:50.697 "name": "BaseBdev3", 00:06:50.697 "uuid": "dc6dbf6e-b114-5a72-87c2-efb376d94326", 00:06:50.697 "is_configured": true, 00:06:50.697 "data_offset": 2048, 00:06:50.697 "data_size": 63488 00:06:50.697 } 00:06:50.697 ] 00:06:50.697 }' 00:06:50.697 19:47:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:50.697 19:47:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:51.017 19:47:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:06:51.017 19:47:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:06:51.276 [2024-11-26 19:47:41.980501] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:06:52.208 19:47:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:06:52.209 19:47:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:52.209 19:47:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:52.209 19:47:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:52.209 19:47:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:06:52.209 19:47:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:06:52.209 19:47:42 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:06:52.209 19:47:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:06:52.209 19:47:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:06:52.209 19:47:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:06:52.209 19:47:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:52.209 19:47:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:52.209 19:47:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:06:52.209 19:47:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:52.209 19:47:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:52.209 19:47:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:52.209 19:47:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:52.209 19:47:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:06:52.209 19:47:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:52.209 19:47:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:52.209 19:47:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:52.209 19:47:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:52.209 19:47:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:52.209 "name": "raid_bdev1", 00:06:52.209 "uuid": "f395f1b4-9821-48f5-ac66-821e7eb19af0", 00:06:52.209 "strip_size_kb": 64, 00:06:52.209 "state": "online", 00:06:52.209 
"raid_level": "raid0", 00:06:52.209 "superblock": true, 00:06:52.209 "num_base_bdevs": 3, 00:06:52.209 "num_base_bdevs_discovered": 3, 00:06:52.209 "num_base_bdevs_operational": 3, 00:06:52.209 "base_bdevs_list": [ 00:06:52.209 { 00:06:52.209 "name": "BaseBdev1", 00:06:52.209 "uuid": "f2fd4275-bf99-5710-a3ae-8d5d8d7b4d90", 00:06:52.209 "is_configured": true, 00:06:52.209 "data_offset": 2048, 00:06:52.209 "data_size": 63488 00:06:52.209 }, 00:06:52.209 { 00:06:52.209 "name": "BaseBdev2", 00:06:52.209 "uuid": "e0f9e7ae-1ede-5772-bfcd-23c30ac78a34", 00:06:52.209 "is_configured": true, 00:06:52.209 "data_offset": 2048, 00:06:52.209 "data_size": 63488 00:06:52.209 }, 00:06:52.209 { 00:06:52.209 "name": "BaseBdev3", 00:06:52.209 "uuid": "dc6dbf6e-b114-5a72-87c2-efb376d94326", 00:06:52.209 "is_configured": true, 00:06:52.209 "data_offset": 2048, 00:06:52.209 "data_size": 63488 00:06:52.209 } 00:06:52.209 ] 00:06:52.209 }' 00:06:52.209 19:47:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:52.209 19:47:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:52.467 19:47:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:06:52.467 19:47:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:52.467 19:47:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:52.467 [2024-11-26 19:47:43.218940] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:06:52.467 [2024-11-26 19:47:43.218975] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:52.467 [2024-11-26 19:47:43.222111] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:52.467 [2024-11-26 19:47:43.222169] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:52.467 [2024-11-26 19:47:43.222211] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:52.467 [2024-11-26 19:47:43.222221] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:06:52.467 { 00:06:52.467 "results": [ 00:06:52.467 { 00:06:52.467 "job": "raid_bdev1", 00:06:52.467 "core_mask": "0x1", 00:06:52.467 "workload": "randrw", 00:06:52.467 "percentage": 50, 00:06:52.467 "status": "finished", 00:06:52.467 "queue_depth": 1, 00:06:52.467 "io_size": 131072, 00:06:52.467 "runtime": 1.236358, 00:06:52.467 "iops": 13727.41552204135, 00:06:52.467 "mibps": 1715.9269402551688, 00:06:52.467 "io_failed": 1, 00:06:52.467 "io_timeout": 0, 00:06:52.467 "avg_latency_us": 100.1182424574777, 00:06:52.467 "min_latency_us": 33.47692307692308, 00:06:52.467 "max_latency_us": 1739.2246153846154 00:06:52.467 } 00:06:52.467 ], 00:06:52.467 "core_count": 1 00:06:52.467 } 00:06:52.467 19:47:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:52.467 19:47:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63959 00:06:52.467 19:47:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 63959 ']' 00:06:52.467 19:47:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 63959 00:06:52.467 19:47:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:06:52.467 19:47:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:52.467 19:47:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63959 00:06:52.467 19:47:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:52.467 killing process with pid 63959 00:06:52.467 19:47:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:52.467 19:47:43 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63959' 00:06:52.467 19:47:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 63959 00:06:52.467 [2024-11-26 19:47:43.250375] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:52.467 19:47:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 63959 00:06:52.725 [2024-11-26 19:47:43.404397] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:53.658 19:47:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.dZDDalRysr 00:06:53.658 19:47:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:06:53.658 19:47:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:06:53.658 19:47:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.81 00:06:53.658 19:47:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:06:53.658 19:47:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:06:53.658 19:47:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:06:53.658 19:47:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.81 != \0\.\0\0 ]] 00:06:53.658 00:06:53.658 real 0m3.747s 00:06:53.658 user 0m4.410s 00:06:53.658 sys 0m0.421s 00:06:53.658 19:47:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:53.658 19:47:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:53.658 ************************************ 00:06:53.658 END TEST raid_write_error_test 00:06:53.658 ************************************ 00:06:53.658 19:47:44 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:06:53.658 19:47:44 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test concat 3 false 00:06:53.658 19:47:44 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:06:53.658 19:47:44 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:53.658 19:47:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:53.658 ************************************ 00:06:53.658 START TEST raid_state_function_test 00:06:53.658 ************************************ 00:06:53.658 19:47:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 3 false 00:06:53.658 19:47:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:06:53.658 19:47:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:06:53.658 19:47:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:06:53.658 19:47:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:06:53.658 19:47:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:06:53.658 19:47:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:06:53.658 19:47:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:06:53.658 19:47:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:06:53.658 19:47:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:06:53.658 19:47:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:06:53.658 19:47:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:06:53.658 19:47:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:06:53.658 19:47:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:06:53.658 19:47:44 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:06:53.658 19:47:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:06:53.658 19:47:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:06:53.658 19:47:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:06:53.658 19:47:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:06:53.658 19:47:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:06:53.658 19:47:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:06:53.658 19:47:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:06:53.658 19:47:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:06:53.658 19:47:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:06:53.658 19:47:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:06:53.658 19:47:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:06:53.658 19:47:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:06:53.658 19:47:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=64097 00:06:53.658 Process raid pid: 64097 00:06:53.658 19:47:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 64097' 00:06:53.658 19:47:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:53.658 19:47:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 64097 00:06:53.658 19:47:44 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 64097 ']' 00:06:53.658 19:47:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:53.658 19:47:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:53.658 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:53.658 19:47:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:53.658 19:47:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:53.658 19:47:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:53.658 [2024-11-26 19:47:44.360462] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 00:06:53.658 [2024-11-26 19:47:44.361064] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:53.658 [2024-11-26 19:47:44.527910] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.916 [2024-11-26 19:47:44.655569] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.916 [2024-11-26 19:47:44.812183] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:53.916 [2024-11-26 19:47:44.812236] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:54.482 19:47:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:54.482 19:47:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:06:54.482 19:47:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd 
bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:06:54.482 19:47:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.482 19:47:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:54.482 [2024-11-26 19:47:45.229367] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:06:54.482 [2024-11-26 19:47:45.229434] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:06:54.482 [2024-11-26 19:47:45.229445] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:54.482 [2024-11-26 19:47:45.229456] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:54.482 [2024-11-26 19:47:45.229463] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:06:54.482 [2024-11-26 19:47:45.229472] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:06:54.482 19:47:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:54.482 19:47:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:06:54.482 19:47:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:54.482 19:47:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:54.482 19:47:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:06:54.482 19:47:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:54.482 19:47:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:06:54.482 19:47:45 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:54.482 19:47:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:54.482 19:47:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:54.482 19:47:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:54.482 19:47:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:54.482 19:47:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.482 19:47:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:54.482 19:47:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:54.482 19:47:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:54.482 19:47:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:54.482 "name": "Existed_Raid", 00:06:54.482 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:54.482 "strip_size_kb": 64, 00:06:54.482 "state": "configuring", 00:06:54.482 "raid_level": "concat", 00:06:54.482 "superblock": false, 00:06:54.482 "num_base_bdevs": 3, 00:06:54.482 "num_base_bdevs_discovered": 0, 00:06:54.482 "num_base_bdevs_operational": 3, 00:06:54.482 "base_bdevs_list": [ 00:06:54.482 { 00:06:54.482 "name": "BaseBdev1", 00:06:54.482 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:54.482 "is_configured": false, 00:06:54.482 "data_offset": 0, 00:06:54.482 "data_size": 0 00:06:54.482 }, 00:06:54.482 { 00:06:54.482 "name": "BaseBdev2", 00:06:54.482 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:54.482 "is_configured": false, 00:06:54.482 "data_offset": 0, 00:06:54.482 "data_size": 0 00:06:54.482 }, 00:06:54.482 { 00:06:54.482 "name": "BaseBdev3", 00:06:54.482 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:06:54.482 "is_configured": false, 00:06:54.482 "data_offset": 0, 00:06:54.482 "data_size": 0 00:06:54.482 } 00:06:54.482 ] 00:06:54.482 }' 00:06:54.482 19:47:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:54.482 19:47:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:54.740 19:47:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:06:54.740 19:47:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.740 19:47:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:54.740 [2024-11-26 19:47:45.557472] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:06:54.740 [2024-11-26 19:47:45.557533] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:06:54.740 19:47:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:54.740 19:47:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:06:54.740 19:47:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.740 19:47:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:54.740 [2024-11-26 19:47:45.565387] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:06:54.740 [2024-11-26 19:47:45.565435] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:06:54.740 [2024-11-26 19:47:45.565443] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:54.740 [2024-11-26 19:47:45.565453] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev2 doesn't exist now 00:06:54.740 [2024-11-26 19:47:45.565459] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:06:54.740 [2024-11-26 19:47:45.565468] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:06:54.740 19:47:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:54.741 19:47:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:06:54.741 19:47:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.741 19:47:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:54.741 [2024-11-26 19:47:45.600390] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:06:54.741 BaseBdev1 00:06:54.741 19:47:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:54.741 19:47:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:06:54.741 19:47:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:06:54.741 19:47:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:06:54.741 19:47:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:06:54.741 19:47:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:06:54.741 19:47:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:06:54.741 19:47:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:06:54.741 19:47:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.741 19:47:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:06:54.741 19:47:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:54.741 19:47:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:06:54.741 19:47:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.741 19:47:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:54.741 [ 00:06:54.741 { 00:06:54.741 "name": "BaseBdev1", 00:06:54.741 "aliases": [ 00:06:54.741 "74433416-037a-48e2-b6d4-99414ba7469f" 00:06:54.741 ], 00:06:54.741 "product_name": "Malloc disk", 00:06:54.741 "block_size": 512, 00:06:54.741 "num_blocks": 65536, 00:06:54.741 "uuid": "74433416-037a-48e2-b6d4-99414ba7469f", 00:06:54.741 "assigned_rate_limits": { 00:06:54.741 "rw_ios_per_sec": 0, 00:06:54.741 "rw_mbytes_per_sec": 0, 00:06:54.741 "r_mbytes_per_sec": 0, 00:06:54.741 "w_mbytes_per_sec": 0 00:06:54.741 }, 00:06:54.741 "claimed": true, 00:06:54.741 "claim_type": "exclusive_write", 00:06:54.741 "zoned": false, 00:06:54.741 "supported_io_types": { 00:06:54.741 "read": true, 00:06:54.741 "write": true, 00:06:54.741 "unmap": true, 00:06:54.741 "flush": true, 00:06:54.741 "reset": true, 00:06:54.741 "nvme_admin": false, 00:06:54.741 "nvme_io": false, 00:06:54.741 "nvme_io_md": false, 00:06:54.741 "write_zeroes": true, 00:06:54.741 "zcopy": true, 00:06:54.741 "get_zone_info": false, 00:06:54.741 "zone_management": false, 00:06:54.741 "zone_append": false, 00:06:54.741 "compare": false, 00:06:54.741 "compare_and_write": false, 00:06:54.741 "abort": true, 00:06:54.741 "seek_hole": false, 00:06:54.741 "seek_data": false, 00:06:54.741 "copy": true, 00:06:54.741 "nvme_iov_md": false 00:06:54.741 }, 00:06:54.741 "memory_domains": [ 00:06:54.741 { 00:06:54.741 "dma_device_id": "system", 00:06:54.741 "dma_device_type": 1 00:06:54.741 }, 00:06:54.741 { 00:06:54.741 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:06:54.741 "dma_device_type": 2 00:06:54.741 } 00:06:54.741 ], 00:06:54.741 "driver_specific": {} 00:06:54.741 } 00:06:54.741 ] 00:06:54.741 19:47:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:54.741 19:47:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:06:54.741 19:47:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:06:54.741 19:47:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:54.741 19:47:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:54.741 19:47:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:06:54.741 19:47:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:54.741 19:47:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:06:54.741 19:47:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:54.741 19:47:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:54.741 19:47:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:54.741 19:47:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:54.741 19:47:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:54.741 19:47:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.741 19:47:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:54.741 19:47:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:54.741 19:47:45 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:54.741 19:47:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:54.741 "name": "Existed_Raid", 00:06:54.741 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:54.741 "strip_size_kb": 64, 00:06:54.741 "state": "configuring", 00:06:54.741 "raid_level": "concat", 00:06:54.741 "superblock": false, 00:06:54.741 "num_base_bdevs": 3, 00:06:54.741 "num_base_bdevs_discovered": 1, 00:06:54.741 "num_base_bdevs_operational": 3, 00:06:54.741 "base_bdevs_list": [ 00:06:54.741 { 00:06:54.741 "name": "BaseBdev1", 00:06:54.741 "uuid": "74433416-037a-48e2-b6d4-99414ba7469f", 00:06:54.741 "is_configured": true, 00:06:54.741 "data_offset": 0, 00:06:54.741 "data_size": 65536 00:06:54.741 }, 00:06:54.741 { 00:06:54.741 "name": "BaseBdev2", 00:06:54.741 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:54.741 "is_configured": false, 00:06:54.741 "data_offset": 0, 00:06:54.741 "data_size": 0 00:06:54.741 }, 00:06:54.741 { 00:06:54.741 "name": "BaseBdev3", 00:06:54.741 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:54.741 "is_configured": false, 00:06:54.741 "data_offset": 0, 00:06:54.741 "data_size": 0 00:06:54.741 } 00:06:54.741 ] 00:06:54.741 }' 00:06:54.741 19:47:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:54.741 19:47:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.306 19:47:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:06:55.306 19:47:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:55.306 19:47:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.306 [2024-11-26 19:47:45.948541] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:06:55.306 [2024-11-26 19:47:45.948605] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:06:55.306 19:47:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:55.306 19:47:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:06:55.306 19:47:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:55.306 19:47:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.306 [2024-11-26 19:47:45.956601] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:06:55.306 [2024-11-26 19:47:45.958619] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:55.306 [2024-11-26 19:47:45.958667] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:55.306 [2024-11-26 19:47:45.958677] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:06:55.306 [2024-11-26 19:47:45.958686] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:06:55.306 19:47:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:55.306 19:47:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:06:55.306 19:47:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:06:55.306 19:47:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:06:55.307 19:47:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:55.307 19:47:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:55.307 19:47:45 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:06:55.307 19:47:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:55.307 19:47:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:06:55.307 19:47:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:55.307 19:47:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:55.307 19:47:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:55.307 19:47:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:55.307 19:47:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:55.307 19:47:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:55.307 19:47:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:55.307 19:47:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.307 19:47:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:55.307 19:47:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:55.307 "name": "Existed_Raid", 00:06:55.307 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:55.307 "strip_size_kb": 64, 00:06:55.307 "state": "configuring", 00:06:55.307 "raid_level": "concat", 00:06:55.307 "superblock": false, 00:06:55.307 "num_base_bdevs": 3, 00:06:55.307 "num_base_bdevs_discovered": 1, 00:06:55.307 "num_base_bdevs_operational": 3, 00:06:55.307 "base_bdevs_list": [ 00:06:55.307 { 00:06:55.307 "name": "BaseBdev1", 00:06:55.307 "uuid": "74433416-037a-48e2-b6d4-99414ba7469f", 00:06:55.307 "is_configured": true, 00:06:55.307 "data_offset": 
0, 00:06:55.307 "data_size": 65536 00:06:55.307 }, 00:06:55.307 { 00:06:55.307 "name": "BaseBdev2", 00:06:55.307 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:55.307 "is_configured": false, 00:06:55.307 "data_offset": 0, 00:06:55.307 "data_size": 0 00:06:55.307 }, 00:06:55.307 { 00:06:55.307 "name": "BaseBdev3", 00:06:55.307 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:55.307 "is_configured": false, 00:06:55.307 "data_offset": 0, 00:06:55.307 "data_size": 0 00:06:55.307 } 00:06:55.307 ] 00:06:55.307 }' 00:06:55.307 19:47:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:55.307 19:47:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.565 19:47:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:06:55.565 19:47:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:55.565 19:47:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.565 [2024-11-26 19:47:46.313130] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:06:55.565 BaseBdev2 00:06:55.565 19:47:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:55.565 19:47:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:06:55.565 19:47:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:06:55.565 19:47:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:06:55.565 19:47:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:06:55.565 19:47:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:06:55.565 19:47:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:06:55.565 19:47:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:06:55.565 19:47:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:55.565 19:47:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.565 19:47:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:55.565 19:47:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:06:55.565 19:47:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:55.565 19:47:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.565 [ 00:06:55.565 { 00:06:55.565 "name": "BaseBdev2", 00:06:55.565 "aliases": [ 00:06:55.565 "4c2565a8-3043-45a9-b863-08e5f9304459" 00:06:55.565 ], 00:06:55.565 "product_name": "Malloc disk", 00:06:55.565 "block_size": 512, 00:06:55.565 "num_blocks": 65536, 00:06:55.565 "uuid": "4c2565a8-3043-45a9-b863-08e5f9304459", 00:06:55.565 "assigned_rate_limits": { 00:06:55.565 "rw_ios_per_sec": 0, 00:06:55.565 "rw_mbytes_per_sec": 0, 00:06:55.565 "r_mbytes_per_sec": 0, 00:06:55.565 "w_mbytes_per_sec": 0 00:06:55.565 }, 00:06:55.565 "claimed": true, 00:06:55.565 "claim_type": "exclusive_write", 00:06:55.565 "zoned": false, 00:06:55.565 "supported_io_types": { 00:06:55.565 "read": true, 00:06:55.565 "write": true, 00:06:55.565 "unmap": true, 00:06:55.565 "flush": true, 00:06:55.565 "reset": true, 00:06:55.565 "nvme_admin": false, 00:06:55.565 "nvme_io": false, 00:06:55.565 "nvme_io_md": false, 00:06:55.565 "write_zeroes": true, 00:06:55.565 "zcopy": true, 00:06:55.565 "get_zone_info": false, 00:06:55.565 "zone_management": false, 00:06:55.565 "zone_append": false, 00:06:55.565 "compare": false, 00:06:55.565 "compare_and_write": false, 00:06:55.565 "abort": true, 00:06:55.565 "seek_hole": 
false, 00:06:55.565 "seek_data": false, 00:06:55.565 "copy": true, 00:06:55.565 "nvme_iov_md": false 00:06:55.565 }, 00:06:55.565 "memory_domains": [ 00:06:55.565 { 00:06:55.565 "dma_device_id": "system", 00:06:55.565 "dma_device_type": 1 00:06:55.565 }, 00:06:55.565 { 00:06:55.565 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:55.565 "dma_device_type": 2 00:06:55.565 } 00:06:55.565 ], 00:06:55.565 "driver_specific": {} 00:06:55.565 } 00:06:55.565 ] 00:06:55.565 19:47:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:55.565 19:47:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:06:55.565 19:47:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:06:55.565 19:47:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:06:55.565 19:47:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:06:55.565 19:47:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:55.565 19:47:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:55.565 19:47:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:06:55.565 19:47:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:55.565 19:47:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:06:55.565 19:47:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:55.565 19:47:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:55.565 19:47:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:55.565 19:47:46 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:06:55.565 19:47:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:55.565 19:47:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:55.565 19:47:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:55.565 19:47:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.565 19:47:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:55.565 19:47:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:55.565 "name": "Existed_Raid", 00:06:55.565 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:55.565 "strip_size_kb": 64, 00:06:55.565 "state": "configuring", 00:06:55.565 "raid_level": "concat", 00:06:55.565 "superblock": false, 00:06:55.565 "num_base_bdevs": 3, 00:06:55.565 "num_base_bdevs_discovered": 2, 00:06:55.565 "num_base_bdevs_operational": 3, 00:06:55.565 "base_bdevs_list": [ 00:06:55.565 { 00:06:55.565 "name": "BaseBdev1", 00:06:55.565 "uuid": "74433416-037a-48e2-b6d4-99414ba7469f", 00:06:55.565 "is_configured": true, 00:06:55.565 "data_offset": 0, 00:06:55.565 "data_size": 65536 00:06:55.565 }, 00:06:55.565 { 00:06:55.565 "name": "BaseBdev2", 00:06:55.565 "uuid": "4c2565a8-3043-45a9-b863-08e5f9304459", 00:06:55.565 "is_configured": true, 00:06:55.565 "data_offset": 0, 00:06:55.565 "data_size": 65536 00:06:55.565 }, 00:06:55.565 { 00:06:55.565 "name": "BaseBdev3", 00:06:55.566 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:55.566 "is_configured": false, 00:06:55.566 "data_offset": 0, 00:06:55.566 "data_size": 0 00:06:55.566 } 00:06:55.566 ] 00:06:55.566 }' 00:06:55.566 19:47:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:55.566 19:47:46 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:06:55.823 19:47:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:06:55.823 19:47:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:55.823 19:47:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.823 [2024-11-26 19:47:46.703092] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:06:55.823 [2024-11-26 19:47:46.703152] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:06:55.823 [2024-11-26 19:47:46.703165] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:06:55.823 [2024-11-26 19:47:46.703474] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:06:55.823 [2024-11-26 19:47:46.703644] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:06:55.823 [2024-11-26 19:47:46.703653] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:06:55.823 BaseBdev3 00:06:55.823 [2024-11-26 19:47:46.703922] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:55.823 19:47:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:55.823 19:47:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:06:55.823 19:47:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:06:55.823 19:47:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:06:55.823 19:47:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:06:55.823 19:47:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:06:55.823 19:47:46 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:06:55.823 19:47:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:06:55.823 19:47:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:55.823 19:47:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.823 19:47:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:55.823 19:47:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:06:55.823 19:47:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:55.823 19:47:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.823 [ 00:06:55.823 { 00:06:55.823 "name": "BaseBdev3", 00:06:55.823 "aliases": [ 00:06:55.823 "35256a5f-6726-4f90-b864-4152c1cfbf52" 00:06:55.823 ], 00:06:55.823 "product_name": "Malloc disk", 00:06:55.823 "block_size": 512, 00:06:55.823 "num_blocks": 65536, 00:06:55.823 "uuid": "35256a5f-6726-4f90-b864-4152c1cfbf52", 00:06:55.823 "assigned_rate_limits": { 00:06:55.823 "rw_ios_per_sec": 0, 00:06:55.823 "rw_mbytes_per_sec": 0, 00:06:55.823 "r_mbytes_per_sec": 0, 00:06:55.823 "w_mbytes_per_sec": 0 00:06:55.823 }, 00:06:55.823 "claimed": true, 00:06:55.823 "claim_type": "exclusive_write", 00:06:55.823 "zoned": false, 00:06:55.823 "supported_io_types": { 00:06:55.823 "read": true, 00:06:55.823 "write": true, 00:06:55.823 "unmap": true, 00:06:55.823 "flush": true, 00:06:55.823 "reset": true, 00:06:55.823 "nvme_admin": false, 00:06:55.823 "nvme_io": false, 00:06:55.823 "nvme_io_md": false, 00:06:55.823 "write_zeroes": true, 00:06:55.823 "zcopy": true, 00:06:55.823 "get_zone_info": false, 00:06:55.823 "zone_management": false, 00:06:55.823 "zone_append": false, 00:06:55.823 "compare": false, 
00:06:55.823 "compare_and_write": false, 00:06:55.823 "abort": true, 00:06:55.823 "seek_hole": false, 00:06:55.823 "seek_data": false, 00:06:55.823 "copy": true, 00:06:55.823 "nvme_iov_md": false 00:06:55.823 }, 00:06:55.823 "memory_domains": [ 00:06:55.823 { 00:06:55.823 "dma_device_id": "system", 00:06:55.823 "dma_device_type": 1 00:06:55.823 }, 00:06:55.823 { 00:06:55.823 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:55.823 "dma_device_type": 2 00:06:55.823 } 00:06:55.823 ], 00:06:55.823 "driver_specific": {} 00:06:55.823 } 00:06:55.823 ] 00:06:55.823 19:47:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:55.823 19:47:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:06:55.823 19:47:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:06:55.823 19:47:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:06:55.823 19:47:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:06:55.823 19:47:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:55.823 19:47:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:06:55.823 19:47:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:06:55.823 19:47:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:55.823 19:47:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:06:55.823 19:47:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:55.823 19:47:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:55.823 19:47:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:06:55.823 19:47:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:55.823 19:47:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:55.823 19:47:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:55.823 19:47:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:55.823 19:47:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.823 19:47:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.080 19:47:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:56.080 "name": "Existed_Raid", 00:06:56.080 "uuid": "ca837e16-f604-4d60-bd7d-a1facceb6919", 00:06:56.080 "strip_size_kb": 64, 00:06:56.080 "state": "online", 00:06:56.080 "raid_level": "concat", 00:06:56.080 "superblock": false, 00:06:56.080 "num_base_bdevs": 3, 00:06:56.080 "num_base_bdevs_discovered": 3, 00:06:56.080 "num_base_bdevs_operational": 3, 00:06:56.080 "base_bdevs_list": [ 00:06:56.080 { 00:06:56.080 "name": "BaseBdev1", 00:06:56.080 "uuid": "74433416-037a-48e2-b6d4-99414ba7469f", 00:06:56.080 "is_configured": true, 00:06:56.080 "data_offset": 0, 00:06:56.080 "data_size": 65536 00:06:56.080 }, 00:06:56.080 { 00:06:56.080 "name": "BaseBdev2", 00:06:56.080 "uuid": "4c2565a8-3043-45a9-b863-08e5f9304459", 00:06:56.080 "is_configured": true, 00:06:56.080 "data_offset": 0, 00:06:56.080 "data_size": 65536 00:06:56.080 }, 00:06:56.080 { 00:06:56.080 "name": "BaseBdev3", 00:06:56.080 "uuid": "35256a5f-6726-4f90-b864-4152c1cfbf52", 00:06:56.080 "is_configured": true, 00:06:56.080 "data_offset": 0, 00:06:56.080 "data_size": 65536 00:06:56.080 } 00:06:56.080 ] 00:06:56.080 }' 00:06:56.080 19:47:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:06:56.080 19:47:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:56.337 19:47:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:06:56.337 19:47:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:06:56.337 19:47:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:06:56.337 19:47:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:06:56.337 19:47:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:06:56.337 19:47:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:06:56.337 19:47:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:06:56.337 19:47:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:06:56.337 19:47:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.337 19:47:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:56.337 [2024-11-26 19:47:47.099609] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:56.337 19:47:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.337 19:47:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:06:56.337 "name": "Existed_Raid", 00:06:56.337 "aliases": [ 00:06:56.337 "ca837e16-f604-4d60-bd7d-a1facceb6919" 00:06:56.337 ], 00:06:56.337 "product_name": "Raid Volume", 00:06:56.337 "block_size": 512, 00:06:56.337 "num_blocks": 196608, 00:06:56.337 "uuid": "ca837e16-f604-4d60-bd7d-a1facceb6919", 00:06:56.337 "assigned_rate_limits": { 00:06:56.337 "rw_ios_per_sec": 0, 00:06:56.337 "rw_mbytes_per_sec": 0, 00:06:56.337 "r_mbytes_per_sec": 
0, 00:06:56.337 "w_mbytes_per_sec": 0 00:06:56.337 }, 00:06:56.337 "claimed": false, 00:06:56.337 "zoned": false, 00:06:56.337 "supported_io_types": { 00:06:56.337 "read": true, 00:06:56.337 "write": true, 00:06:56.337 "unmap": true, 00:06:56.337 "flush": true, 00:06:56.337 "reset": true, 00:06:56.337 "nvme_admin": false, 00:06:56.337 "nvme_io": false, 00:06:56.337 "nvme_io_md": false, 00:06:56.337 "write_zeroes": true, 00:06:56.337 "zcopy": false, 00:06:56.337 "get_zone_info": false, 00:06:56.337 "zone_management": false, 00:06:56.337 "zone_append": false, 00:06:56.337 "compare": false, 00:06:56.337 "compare_and_write": false, 00:06:56.337 "abort": false, 00:06:56.337 "seek_hole": false, 00:06:56.337 "seek_data": false, 00:06:56.337 "copy": false, 00:06:56.337 "nvme_iov_md": false 00:06:56.337 }, 00:06:56.337 "memory_domains": [ 00:06:56.337 { 00:06:56.337 "dma_device_id": "system", 00:06:56.337 "dma_device_type": 1 00:06:56.337 }, 00:06:56.337 { 00:06:56.337 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:56.337 "dma_device_type": 2 00:06:56.337 }, 00:06:56.337 { 00:06:56.337 "dma_device_id": "system", 00:06:56.337 "dma_device_type": 1 00:06:56.337 }, 00:06:56.337 { 00:06:56.337 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:56.337 "dma_device_type": 2 00:06:56.337 }, 00:06:56.337 { 00:06:56.337 "dma_device_id": "system", 00:06:56.337 "dma_device_type": 1 00:06:56.337 }, 00:06:56.337 { 00:06:56.337 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:56.337 "dma_device_type": 2 00:06:56.337 } 00:06:56.337 ], 00:06:56.337 "driver_specific": { 00:06:56.337 "raid": { 00:06:56.337 "uuid": "ca837e16-f604-4d60-bd7d-a1facceb6919", 00:06:56.337 "strip_size_kb": 64, 00:06:56.337 "state": "online", 00:06:56.337 "raid_level": "concat", 00:06:56.337 "superblock": false, 00:06:56.337 "num_base_bdevs": 3, 00:06:56.337 "num_base_bdevs_discovered": 3, 00:06:56.337 "num_base_bdevs_operational": 3, 00:06:56.337 "base_bdevs_list": [ 00:06:56.337 { 00:06:56.337 "name": "BaseBdev1", 
00:06:56.337 "uuid": "74433416-037a-48e2-b6d4-99414ba7469f", 00:06:56.337 "is_configured": true, 00:06:56.337 "data_offset": 0, 00:06:56.337 "data_size": 65536 00:06:56.337 }, 00:06:56.337 { 00:06:56.337 "name": "BaseBdev2", 00:06:56.337 "uuid": "4c2565a8-3043-45a9-b863-08e5f9304459", 00:06:56.337 "is_configured": true, 00:06:56.337 "data_offset": 0, 00:06:56.337 "data_size": 65536 00:06:56.337 }, 00:06:56.337 { 00:06:56.337 "name": "BaseBdev3", 00:06:56.337 "uuid": "35256a5f-6726-4f90-b864-4152c1cfbf52", 00:06:56.337 "is_configured": true, 00:06:56.337 "data_offset": 0, 00:06:56.337 "data_size": 65536 00:06:56.337 } 00:06:56.337 ] 00:06:56.337 } 00:06:56.337 } 00:06:56.337 }' 00:06:56.337 19:47:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:06:56.337 19:47:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:06:56.337 BaseBdev2 00:06:56.337 BaseBdev3' 00:06:56.337 19:47:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:56.337 19:47:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:06:56.337 19:47:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:56.337 19:47:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:06:56.337 19:47:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.337 19:47:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:56.337 19:47:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:56.337 19:47:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:06:56.337 19:47:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:56.337 19:47:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:56.337 19:47:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:56.337 19:47:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:06:56.337 19:47:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.337 19:47:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:56.337 19:47:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:56.337 19:47:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.337 19:47:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:56.337 19:47:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:56.337 19:47:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:56.337 19:47:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:06:56.337 19:47:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.337 19:47:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:56.337 19:47:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:56.337 19:47:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.595 19:47:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
00:06:56.595 19:47:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:56.595 19:47:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:06:56.595 19:47:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.595 19:47:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:56.595 [2024-11-26 19:47:47.287352] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:06:56.595 [2024-11-26 19:47:47.287386] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:56.595 [2024-11-26 19:47:47.287448] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:56.595 19:47:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.595 19:47:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:06:56.595 19:47:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:06:56.595 19:47:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:06:56.595 19:47:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:06:56.595 19:47:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:06:56.595 19:47:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:06:56.595 19:47:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:56.595 19:47:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:06:56.595 19:47:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:06:56.595 19:47:47 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:56.595 19:47:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:56.595 19:47:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:56.595 19:47:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:56.595 19:47:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:56.595 19:47:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:56.595 19:47:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:56.595 19:47:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:56.595 19:47:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.595 19:47:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:56.595 19:47:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.595 19:47:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:56.595 "name": "Existed_Raid", 00:06:56.595 "uuid": "ca837e16-f604-4d60-bd7d-a1facceb6919", 00:06:56.595 "strip_size_kb": 64, 00:06:56.595 "state": "offline", 00:06:56.595 "raid_level": "concat", 00:06:56.595 "superblock": false, 00:06:56.595 "num_base_bdevs": 3, 00:06:56.595 "num_base_bdevs_discovered": 2, 00:06:56.595 "num_base_bdevs_operational": 2, 00:06:56.595 "base_bdevs_list": [ 00:06:56.595 { 00:06:56.595 "name": null, 00:06:56.595 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:56.595 "is_configured": false, 00:06:56.595 "data_offset": 0, 00:06:56.595 "data_size": 65536 00:06:56.595 }, 00:06:56.595 { 00:06:56.595 "name": "BaseBdev2", 00:06:56.595 "uuid": 
"4c2565a8-3043-45a9-b863-08e5f9304459", 00:06:56.595 "is_configured": true, 00:06:56.595 "data_offset": 0, 00:06:56.595 "data_size": 65536 00:06:56.595 }, 00:06:56.595 { 00:06:56.595 "name": "BaseBdev3", 00:06:56.595 "uuid": "35256a5f-6726-4f90-b864-4152c1cfbf52", 00:06:56.595 "is_configured": true, 00:06:56.595 "data_offset": 0, 00:06:56.595 "data_size": 65536 00:06:56.595 } 00:06:56.595 ] 00:06:56.595 }' 00:06:56.595 19:47:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:56.595 19:47:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:56.852 19:47:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:06:56.852 19:47:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:06:56.852 19:47:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:56.852 19:47:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:06:56.852 19:47:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.852 19:47:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:56.852 19:47:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.852 19:47:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:06:56.852 19:47:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:06:56.852 19:47:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:06:56.852 19:47:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.852 19:47:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:56.852 [2024-11-26 19:47:47.710259] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:06:56.852 19:47:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.852 19:47:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:06:56.852 19:47:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:06:56.852 19:47:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:56.852 19:47:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:06:56.852 19:47:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.852 19:47:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:57.110 19:47:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.110 19:47:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:06:57.110 19:47:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:06:57.110 19:47:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:06:57.110 19:47:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.110 19:47:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:57.110 [2024-11-26 19:47:47.816833] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:06:57.110 [2024-11-26 19:47:47.816899] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:06:57.110 19:47:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.110 19:47:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:06:57.110 19:47:47 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:06:57.110 19:47:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:06:57.110 19:47:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:57.110 19:47:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.110 19:47:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:57.110 19:47:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.110 19:47:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:06:57.110 19:47:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:06:57.110 19:47:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:06:57.110 19:47:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:06:57.110 19:47:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:06:57.110 19:47:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:06:57.110 19:47:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.110 19:47:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:57.110 BaseBdev2 00:06:57.110 19:47:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.110 19:47:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:06:57.110 19:47:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:06:57.110 19:47:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:06:57.110 
19:47:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:06:57.110 19:47:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:06:57.110 19:47:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:06:57.110 19:47:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:06:57.110 19:47:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.110 19:47:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:57.110 19:47:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.110 19:47:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:06:57.110 19:47:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.110 19:47:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:57.110 [ 00:06:57.110 { 00:06:57.110 "name": "BaseBdev2", 00:06:57.110 "aliases": [ 00:06:57.110 "ff252eeb-1709-446e-a1b3-bcc053ebda6a" 00:06:57.110 ], 00:06:57.110 "product_name": "Malloc disk", 00:06:57.110 "block_size": 512, 00:06:57.110 "num_blocks": 65536, 00:06:57.110 "uuid": "ff252eeb-1709-446e-a1b3-bcc053ebda6a", 00:06:57.110 "assigned_rate_limits": { 00:06:57.110 "rw_ios_per_sec": 0, 00:06:57.110 "rw_mbytes_per_sec": 0, 00:06:57.110 "r_mbytes_per_sec": 0, 00:06:57.110 "w_mbytes_per_sec": 0 00:06:57.110 }, 00:06:57.110 "claimed": false, 00:06:57.110 "zoned": false, 00:06:57.110 "supported_io_types": { 00:06:57.110 "read": true, 00:06:57.110 "write": true, 00:06:57.110 "unmap": true, 00:06:57.110 "flush": true, 00:06:57.110 "reset": true, 00:06:57.110 "nvme_admin": false, 00:06:57.110 "nvme_io": false, 00:06:57.110 "nvme_io_md": false, 00:06:57.110 "write_zeroes": true, 
00:06:57.110 "zcopy": true, 00:06:57.110 "get_zone_info": false, 00:06:57.110 "zone_management": false, 00:06:57.110 "zone_append": false, 00:06:57.110 "compare": false, 00:06:57.110 "compare_and_write": false, 00:06:57.110 "abort": true, 00:06:57.110 "seek_hole": false, 00:06:57.110 "seek_data": false, 00:06:57.110 "copy": true, 00:06:57.110 "nvme_iov_md": false 00:06:57.110 }, 00:06:57.110 "memory_domains": [ 00:06:57.110 { 00:06:57.110 "dma_device_id": "system", 00:06:57.110 "dma_device_type": 1 00:06:57.110 }, 00:06:57.110 { 00:06:57.110 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:57.110 "dma_device_type": 2 00:06:57.110 } 00:06:57.110 ], 00:06:57.110 "driver_specific": {} 00:06:57.110 } 00:06:57.110 ] 00:06:57.110 19:47:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.110 19:47:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:06:57.110 19:47:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:06:57.110 19:47:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:06:57.110 19:47:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:06:57.110 19:47:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.110 19:47:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:57.110 BaseBdev3 00:06:57.110 19:47:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.110 19:47:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:06:57.110 19:47:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:06:57.110 19:47:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:06:57.110 19:47:48 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:06:57.110 19:47:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:06:57.110 19:47:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:06:57.110 19:47:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:06:57.110 19:47:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.110 19:47:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:57.110 19:47:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.110 19:47:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:06:57.110 19:47:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.110 19:47:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:57.110 [ 00:06:57.110 { 00:06:57.110 "name": "BaseBdev3", 00:06:57.110 "aliases": [ 00:06:57.110 "1c33cfa0-a712-4876-a687-db611eaa6074" 00:06:57.110 ], 00:06:57.111 "product_name": "Malloc disk", 00:06:57.111 "block_size": 512, 00:06:57.111 "num_blocks": 65536, 00:06:57.111 "uuid": "1c33cfa0-a712-4876-a687-db611eaa6074", 00:06:57.111 "assigned_rate_limits": { 00:06:57.111 "rw_ios_per_sec": 0, 00:06:57.111 "rw_mbytes_per_sec": 0, 00:06:57.111 "r_mbytes_per_sec": 0, 00:06:57.111 "w_mbytes_per_sec": 0 00:06:57.111 }, 00:06:57.111 "claimed": false, 00:06:57.111 "zoned": false, 00:06:57.111 "supported_io_types": { 00:06:57.111 "read": true, 00:06:57.111 "write": true, 00:06:57.111 "unmap": true, 00:06:57.111 "flush": true, 00:06:57.111 "reset": true, 00:06:57.111 "nvme_admin": false, 00:06:57.111 "nvme_io": false, 00:06:57.111 "nvme_io_md": false, 00:06:57.111 "write_zeroes": true, 
00:06:57.111 "zcopy": true, 00:06:57.111 "get_zone_info": false, 00:06:57.111 "zone_management": false, 00:06:57.111 "zone_append": false, 00:06:57.111 "compare": false, 00:06:57.111 "compare_and_write": false, 00:06:57.111 "abort": true, 00:06:57.111 "seek_hole": false, 00:06:57.111 "seek_data": false, 00:06:57.111 "copy": true, 00:06:57.111 "nvme_iov_md": false 00:06:57.111 }, 00:06:57.111 "memory_domains": [ 00:06:57.111 { 00:06:57.111 "dma_device_id": "system", 00:06:57.111 "dma_device_type": 1 00:06:57.111 }, 00:06:57.111 { 00:06:57.111 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:57.111 "dma_device_type": 2 00:06:57.111 } 00:06:57.111 ], 00:06:57.111 "driver_specific": {} 00:06:57.111 } 00:06:57.111 ] 00:06:57.111 19:47:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.111 19:47:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:06:57.111 19:47:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:06:57.111 19:47:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:06:57.111 19:47:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:06:57.111 19:47:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.111 19:47:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:57.111 [2024-11-26 19:47:48.029019] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:06:57.111 [2024-11-26 19:47:48.029074] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:06:57.111 [2024-11-26 19:47:48.029100] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:06:57.111 [2024-11-26 19:47:48.031144] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:06:57.111 19:47:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.111 19:47:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:06:57.111 19:47:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:57.111 19:47:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:57.111 19:47:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:06:57.111 19:47:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:57.111 19:47:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:06:57.111 19:47:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:57.111 19:47:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:57.111 19:47:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:57.111 19:47:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:57.111 19:47:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:57.111 19:47:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:57.111 19:47:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.111 19:47:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:57.368 19:47:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.368 19:47:48 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:57.368 "name": "Existed_Raid", 00:06:57.368 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:57.368 "strip_size_kb": 64, 00:06:57.368 "state": "configuring", 00:06:57.368 "raid_level": "concat", 00:06:57.368 "superblock": false, 00:06:57.368 "num_base_bdevs": 3, 00:06:57.368 "num_base_bdevs_discovered": 2, 00:06:57.368 "num_base_bdevs_operational": 3, 00:06:57.368 "base_bdevs_list": [ 00:06:57.368 { 00:06:57.368 "name": "BaseBdev1", 00:06:57.368 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:57.368 "is_configured": false, 00:06:57.368 "data_offset": 0, 00:06:57.368 "data_size": 0 00:06:57.368 }, 00:06:57.368 { 00:06:57.368 "name": "BaseBdev2", 00:06:57.368 "uuid": "ff252eeb-1709-446e-a1b3-bcc053ebda6a", 00:06:57.368 "is_configured": true, 00:06:57.368 "data_offset": 0, 00:06:57.368 "data_size": 65536 00:06:57.368 }, 00:06:57.368 { 00:06:57.368 "name": "BaseBdev3", 00:06:57.368 "uuid": "1c33cfa0-a712-4876-a687-db611eaa6074", 00:06:57.368 "is_configured": true, 00:06:57.368 "data_offset": 0, 00:06:57.368 "data_size": 65536 00:06:57.368 } 00:06:57.368 ] 00:06:57.368 }' 00:06:57.368 19:47:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:57.368 19:47:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:57.634 19:47:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:06:57.634 19:47:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.634 19:47:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:57.634 [2024-11-26 19:47:48.345105] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:06:57.634 19:47:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.634 19:47:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:06:57.634 19:47:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:57.634 19:47:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:57.634 19:47:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:06:57.634 19:47:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:57.634 19:47:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:06:57.634 19:47:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:57.634 19:47:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:57.634 19:47:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:57.634 19:47:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:57.634 19:47:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:57.634 19:47:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:57.634 19:47:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.634 19:47:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:57.634 19:47:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.634 19:47:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:57.634 "name": "Existed_Raid", 00:06:57.634 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:57.634 "strip_size_kb": 64, 00:06:57.634 "state": "configuring", 00:06:57.634 "raid_level": "concat", 00:06:57.634 "superblock": false, 
00:06:57.634 "num_base_bdevs": 3, 00:06:57.634 "num_base_bdevs_discovered": 1, 00:06:57.634 "num_base_bdevs_operational": 3, 00:06:57.634 "base_bdevs_list": [ 00:06:57.634 { 00:06:57.634 "name": "BaseBdev1", 00:06:57.634 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:57.634 "is_configured": false, 00:06:57.634 "data_offset": 0, 00:06:57.634 "data_size": 0 00:06:57.634 }, 00:06:57.634 { 00:06:57.634 "name": null, 00:06:57.634 "uuid": "ff252eeb-1709-446e-a1b3-bcc053ebda6a", 00:06:57.634 "is_configured": false, 00:06:57.634 "data_offset": 0, 00:06:57.634 "data_size": 65536 00:06:57.634 }, 00:06:57.634 { 00:06:57.634 "name": "BaseBdev3", 00:06:57.635 "uuid": "1c33cfa0-a712-4876-a687-db611eaa6074", 00:06:57.635 "is_configured": true, 00:06:57.635 "data_offset": 0, 00:06:57.635 "data_size": 65536 00:06:57.635 } 00:06:57.635 ] 00:06:57.635 }' 00:06:57.635 19:47:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:57.635 19:47:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:57.892 19:47:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:57.892 19:47:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.892 19:47:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:57.892 19:47:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:06:57.892 19:47:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.892 19:47:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:06:57.892 19:47:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:06:57.892 19:47:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.892 
19:47:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:57.892 [2024-11-26 19:47:48.710098] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:06:57.892 BaseBdev1 00:06:57.892 19:47:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.892 19:47:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:06:57.892 19:47:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:06:57.892 19:47:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:06:57.892 19:47:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:06:57.892 19:47:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:06:57.892 19:47:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:06:57.892 19:47:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:06:57.892 19:47:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.892 19:47:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:57.892 19:47:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.892 19:47:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:06:57.892 19:47:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.892 19:47:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:57.892 [ 00:06:57.892 { 00:06:57.892 "name": "BaseBdev1", 00:06:57.892 "aliases": [ 00:06:57.892 "73114c6d-e3bd-4679-98d0-c4a30dbb4b48" 00:06:57.892 ], 00:06:57.892 "product_name": 
"Malloc disk", 00:06:57.892 "block_size": 512, 00:06:57.892 "num_blocks": 65536, 00:06:57.892 "uuid": "73114c6d-e3bd-4679-98d0-c4a30dbb4b48", 00:06:57.892 "assigned_rate_limits": { 00:06:57.892 "rw_ios_per_sec": 0, 00:06:57.892 "rw_mbytes_per_sec": 0, 00:06:57.892 "r_mbytes_per_sec": 0, 00:06:57.892 "w_mbytes_per_sec": 0 00:06:57.892 }, 00:06:57.892 "claimed": true, 00:06:57.892 "claim_type": "exclusive_write", 00:06:57.892 "zoned": false, 00:06:57.892 "supported_io_types": { 00:06:57.892 "read": true, 00:06:57.892 "write": true, 00:06:57.892 "unmap": true, 00:06:57.892 "flush": true, 00:06:57.892 "reset": true, 00:06:57.892 "nvme_admin": false, 00:06:57.892 "nvme_io": false, 00:06:57.892 "nvme_io_md": false, 00:06:57.892 "write_zeroes": true, 00:06:57.892 "zcopy": true, 00:06:57.892 "get_zone_info": false, 00:06:57.892 "zone_management": false, 00:06:57.892 "zone_append": false, 00:06:57.892 "compare": false, 00:06:57.892 "compare_and_write": false, 00:06:57.892 "abort": true, 00:06:57.892 "seek_hole": false, 00:06:57.892 "seek_data": false, 00:06:57.892 "copy": true, 00:06:57.892 "nvme_iov_md": false 00:06:57.892 }, 00:06:57.892 "memory_domains": [ 00:06:57.892 { 00:06:57.892 "dma_device_id": "system", 00:06:57.892 "dma_device_type": 1 00:06:57.892 }, 00:06:57.892 { 00:06:57.892 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:57.892 "dma_device_type": 2 00:06:57.892 } 00:06:57.892 ], 00:06:57.892 "driver_specific": {} 00:06:57.892 } 00:06:57.892 ] 00:06:57.892 19:47:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.892 19:47:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:06:57.892 19:47:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:06:57.892 19:47:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:57.892 19:47:48 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:57.892 19:47:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:06:57.892 19:47:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:57.892 19:47:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:06:57.892 19:47:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:57.892 19:47:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:57.892 19:47:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:57.892 19:47:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:57.892 19:47:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:57.892 19:47:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.892 19:47:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:57.892 19:47:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:57.892 19:47:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.892 19:47:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:57.892 "name": "Existed_Raid", 00:06:57.892 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:57.892 "strip_size_kb": 64, 00:06:57.892 "state": "configuring", 00:06:57.892 "raid_level": "concat", 00:06:57.892 "superblock": false, 00:06:57.892 "num_base_bdevs": 3, 00:06:57.892 "num_base_bdevs_discovered": 2, 00:06:57.892 "num_base_bdevs_operational": 3, 00:06:57.892 "base_bdevs_list": [ 00:06:57.892 { 00:06:57.892 "name": "BaseBdev1", 
00:06:57.892 "uuid": "73114c6d-e3bd-4679-98d0-c4a30dbb4b48", 00:06:57.892 "is_configured": true, 00:06:57.892 "data_offset": 0, 00:06:57.892 "data_size": 65536 00:06:57.892 }, 00:06:57.892 { 00:06:57.892 "name": null, 00:06:57.892 "uuid": "ff252eeb-1709-446e-a1b3-bcc053ebda6a", 00:06:57.892 "is_configured": false, 00:06:57.892 "data_offset": 0, 00:06:57.892 "data_size": 65536 00:06:57.892 }, 00:06:57.892 { 00:06:57.892 "name": "BaseBdev3", 00:06:57.892 "uuid": "1c33cfa0-a712-4876-a687-db611eaa6074", 00:06:57.892 "is_configured": true, 00:06:57.892 "data_offset": 0, 00:06:57.892 "data_size": 65536 00:06:57.892 } 00:06:57.892 ] 00:06:57.892 }' 00:06:57.892 19:47:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:57.892 19:47:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:58.150 19:47:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:06:58.150 19:47:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:58.150 19:47:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:58.150 19:47:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:58.408 19:47:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:58.408 19:47:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:06:58.408 19:47:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:06:58.408 19:47:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:58.408 19:47:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:58.408 [2024-11-26 19:47:49.106530] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:06:58.408 
19:47:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:58.408 19:47:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:06:58.408 19:47:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:58.408 19:47:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:58.408 19:47:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:06:58.408 19:47:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:58.408 19:47:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:06:58.408 19:47:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:58.408 19:47:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:58.408 19:47:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:58.408 19:47:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:58.408 19:47:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:58.408 19:47:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:58.408 19:47:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:58.408 19:47:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:58.408 19:47:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:58.408 19:47:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:58.408 "name": "Existed_Raid", 00:06:58.408 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:06:58.408 "strip_size_kb": 64, 00:06:58.408 "state": "configuring", 00:06:58.408 "raid_level": "concat", 00:06:58.408 "superblock": false, 00:06:58.408 "num_base_bdevs": 3, 00:06:58.408 "num_base_bdevs_discovered": 1, 00:06:58.408 "num_base_bdevs_operational": 3, 00:06:58.408 "base_bdevs_list": [ 00:06:58.408 { 00:06:58.408 "name": "BaseBdev1", 00:06:58.408 "uuid": "73114c6d-e3bd-4679-98d0-c4a30dbb4b48", 00:06:58.408 "is_configured": true, 00:06:58.408 "data_offset": 0, 00:06:58.408 "data_size": 65536 00:06:58.408 }, 00:06:58.408 { 00:06:58.408 "name": null, 00:06:58.408 "uuid": "ff252eeb-1709-446e-a1b3-bcc053ebda6a", 00:06:58.408 "is_configured": false, 00:06:58.408 "data_offset": 0, 00:06:58.408 "data_size": 65536 00:06:58.408 }, 00:06:58.408 { 00:06:58.408 "name": null, 00:06:58.408 "uuid": "1c33cfa0-a712-4876-a687-db611eaa6074", 00:06:58.408 "is_configured": false, 00:06:58.408 "data_offset": 0, 00:06:58.408 "data_size": 65536 00:06:58.408 } 00:06:58.408 ] 00:06:58.408 }' 00:06:58.408 19:47:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:58.408 19:47:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:58.665 19:47:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:06:58.665 19:47:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:58.665 19:47:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:58.665 19:47:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:58.665 19:47:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:58.665 19:47:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:06:58.665 19:47:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:06:58.665 19:47:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:58.665 19:47:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:58.665 [2024-11-26 19:47:49.450693] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:06:58.665 19:47:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:58.665 19:47:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:06:58.665 19:47:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:58.666 19:47:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:58.666 19:47:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:06:58.666 19:47:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:58.666 19:47:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:06:58.666 19:47:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:58.666 19:47:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:58.666 19:47:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:58.666 19:47:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:58.666 19:47:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:58.666 19:47:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:58.666 19:47:49 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:06:58.666 19:47:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:58.666 19:47:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:58.666 19:47:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:58.666 "name": "Existed_Raid", 00:06:58.666 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:58.666 "strip_size_kb": 64, 00:06:58.666 "state": "configuring", 00:06:58.666 "raid_level": "concat", 00:06:58.666 "superblock": false, 00:06:58.666 "num_base_bdevs": 3, 00:06:58.666 "num_base_bdevs_discovered": 2, 00:06:58.666 "num_base_bdevs_operational": 3, 00:06:58.666 "base_bdevs_list": [ 00:06:58.666 { 00:06:58.666 "name": "BaseBdev1", 00:06:58.666 "uuid": "73114c6d-e3bd-4679-98d0-c4a30dbb4b48", 00:06:58.666 "is_configured": true, 00:06:58.666 "data_offset": 0, 00:06:58.666 "data_size": 65536 00:06:58.666 }, 00:06:58.666 { 00:06:58.666 "name": null, 00:06:58.666 "uuid": "ff252eeb-1709-446e-a1b3-bcc053ebda6a", 00:06:58.666 "is_configured": false, 00:06:58.666 "data_offset": 0, 00:06:58.666 "data_size": 65536 00:06:58.666 }, 00:06:58.666 { 00:06:58.666 "name": "BaseBdev3", 00:06:58.666 "uuid": "1c33cfa0-a712-4876-a687-db611eaa6074", 00:06:58.666 "is_configured": true, 00:06:58.666 "data_offset": 0, 00:06:58.666 "data_size": 65536 00:06:58.666 } 00:06:58.666 ] 00:06:58.666 }' 00:06:58.666 19:47:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:58.666 19:47:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:58.923 19:47:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:06:58.923 19:47:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:58.923 19:47:49 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:06:58.923 19:47:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:58.923 19:47:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:58.923 19:47:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:06:58.923 19:47:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:06:58.923 19:47:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:58.923 19:47:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:58.923 [2024-11-26 19:47:49.810779] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:06:59.181 19:47:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:59.181 19:47:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:06:59.181 19:47:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:59.181 19:47:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:59.181 19:47:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:06:59.181 19:47:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:59.181 19:47:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:06:59.181 19:47:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:59.181 19:47:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:59.181 19:47:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:59.181 
19:47:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:59.181 19:47:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:59.181 19:47:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:59.181 19:47:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:59.181 19:47:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:59.181 19:47:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:59.181 19:47:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:59.181 "name": "Existed_Raid", 00:06:59.181 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:59.181 "strip_size_kb": 64, 00:06:59.181 "state": "configuring", 00:06:59.181 "raid_level": "concat", 00:06:59.181 "superblock": false, 00:06:59.181 "num_base_bdevs": 3, 00:06:59.181 "num_base_bdevs_discovered": 1, 00:06:59.181 "num_base_bdevs_operational": 3, 00:06:59.181 "base_bdevs_list": [ 00:06:59.181 { 00:06:59.181 "name": null, 00:06:59.181 "uuid": "73114c6d-e3bd-4679-98d0-c4a30dbb4b48", 00:06:59.181 "is_configured": false, 00:06:59.181 "data_offset": 0, 00:06:59.181 "data_size": 65536 00:06:59.181 }, 00:06:59.181 { 00:06:59.181 "name": null, 00:06:59.181 "uuid": "ff252eeb-1709-446e-a1b3-bcc053ebda6a", 00:06:59.181 "is_configured": false, 00:06:59.181 "data_offset": 0, 00:06:59.181 "data_size": 65536 00:06:59.181 }, 00:06:59.181 { 00:06:59.181 "name": "BaseBdev3", 00:06:59.181 "uuid": "1c33cfa0-a712-4876-a687-db611eaa6074", 00:06:59.181 "is_configured": true, 00:06:59.181 "data_offset": 0, 00:06:59.181 "data_size": 65536 00:06:59.181 } 00:06:59.181 ] 00:06:59.181 }' 00:06:59.181 19:47:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:59.181 19:47:49 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:59.439 19:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:06:59.439 19:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:59.439 19:47:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:59.439 19:47:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:59.439 19:47:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:59.439 19:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:06:59.439 19:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:06:59.439 19:47:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:59.439 19:47:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:59.439 [2024-11-26 19:47:50.230981] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:06:59.439 19:47:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:59.439 19:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:06:59.439 19:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:59.439 19:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:59.439 19:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:06:59.439 19:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:59.439 19:47:50 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:06:59.439 19:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:59.439 19:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:59.439 19:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:59.439 19:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:59.439 19:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:59.439 19:47:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:59.439 19:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:59.439 19:47:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:59.439 19:47:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:59.439 19:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:59.439 "name": "Existed_Raid", 00:06:59.439 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:59.439 "strip_size_kb": 64, 00:06:59.439 "state": "configuring", 00:06:59.439 "raid_level": "concat", 00:06:59.439 "superblock": false, 00:06:59.439 "num_base_bdevs": 3, 00:06:59.439 "num_base_bdevs_discovered": 2, 00:06:59.439 "num_base_bdevs_operational": 3, 00:06:59.439 "base_bdevs_list": [ 00:06:59.439 { 00:06:59.439 "name": null, 00:06:59.439 "uuid": "73114c6d-e3bd-4679-98d0-c4a30dbb4b48", 00:06:59.439 "is_configured": false, 00:06:59.439 "data_offset": 0, 00:06:59.439 "data_size": 65536 00:06:59.439 }, 00:06:59.439 { 00:06:59.439 "name": "BaseBdev2", 00:06:59.439 "uuid": "ff252eeb-1709-446e-a1b3-bcc053ebda6a", 00:06:59.439 "is_configured": true, 00:06:59.439 "data_offset": 
0, 00:06:59.439 "data_size": 65536 00:06:59.439 }, 00:06:59.439 { 00:06:59.439 "name": "BaseBdev3", 00:06:59.439 "uuid": "1c33cfa0-a712-4876-a687-db611eaa6074", 00:06:59.439 "is_configured": true, 00:06:59.439 "data_offset": 0, 00:06:59.439 "data_size": 65536 00:06:59.439 } 00:06:59.439 ] 00:06:59.439 }' 00:06:59.439 19:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:59.439 19:47:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:59.696 19:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:59.696 19:47:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:59.696 19:47:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:59.696 19:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:06:59.696 19:47:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:59.696 19:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:06:59.696 19:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:59.696 19:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:06:59.696 19:47:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:59.696 19:47:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:59.696 19:47:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:59.696 19:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 73114c6d-e3bd-4679-98d0-c4a30dbb4b48 00:06:59.696 19:47:50 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:06:59.696 19:47:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:59.954 [2024-11-26 19:47:50.643730] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:06:59.954 [2024-11-26 19:47:50.643787] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:06:59.954 [2024-11-26 19:47:50.643797] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:06:59.954 [2024-11-26 19:47:50.644069] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:06:59.954 [2024-11-26 19:47:50.644219] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:06:59.954 [2024-11-26 19:47:50.644227] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:06:59.954 [2024-11-26 19:47:50.644495] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:59.954 NewBaseBdev 00:06:59.954 19:47:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:59.954 19:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:06:59.954 19:47:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:06:59.954 19:47:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:06:59.954 19:47:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:06:59.954 19:47:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:06:59.954 19:47:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:06:59.954 19:47:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:06:59.954 
19:47:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:59.954 19:47:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:59.954 19:47:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:59.954 19:47:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:06:59.954 19:47:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:59.954 19:47:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:59.954 [ 00:06:59.954 { 00:06:59.954 "name": "NewBaseBdev", 00:06:59.954 "aliases": [ 00:06:59.954 "73114c6d-e3bd-4679-98d0-c4a30dbb4b48" 00:06:59.954 ], 00:06:59.954 "product_name": "Malloc disk", 00:06:59.954 "block_size": 512, 00:06:59.954 "num_blocks": 65536, 00:06:59.954 "uuid": "73114c6d-e3bd-4679-98d0-c4a30dbb4b48", 00:06:59.954 "assigned_rate_limits": { 00:06:59.954 "rw_ios_per_sec": 0, 00:06:59.954 "rw_mbytes_per_sec": 0, 00:06:59.954 "r_mbytes_per_sec": 0, 00:06:59.954 "w_mbytes_per_sec": 0 00:06:59.954 }, 00:06:59.954 "claimed": true, 00:06:59.954 "claim_type": "exclusive_write", 00:06:59.954 "zoned": false, 00:06:59.954 "supported_io_types": { 00:06:59.954 "read": true, 00:06:59.954 "write": true, 00:06:59.954 "unmap": true, 00:06:59.954 "flush": true, 00:06:59.954 "reset": true, 00:06:59.954 "nvme_admin": false, 00:06:59.954 "nvme_io": false, 00:06:59.954 "nvme_io_md": false, 00:06:59.954 "write_zeroes": true, 00:06:59.954 "zcopy": true, 00:06:59.954 "get_zone_info": false, 00:06:59.954 "zone_management": false, 00:06:59.954 "zone_append": false, 00:06:59.954 "compare": false, 00:06:59.954 "compare_and_write": false, 00:06:59.954 "abort": true, 00:06:59.954 "seek_hole": false, 00:06:59.954 "seek_data": false, 00:06:59.954 "copy": true, 00:06:59.954 "nvme_iov_md": false 00:06:59.954 }, 00:06:59.954 
"memory_domains": [ 00:06:59.954 { 00:06:59.954 "dma_device_id": "system", 00:06:59.954 "dma_device_type": 1 00:06:59.954 }, 00:06:59.954 { 00:06:59.954 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:59.954 "dma_device_type": 2 00:06:59.954 } 00:06:59.954 ], 00:06:59.954 "driver_specific": {} 00:06:59.954 } 00:06:59.954 ] 00:06:59.954 19:47:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:59.954 19:47:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:06:59.954 19:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:06:59.954 19:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:59.954 19:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:06:59.954 19:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:06:59.954 19:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:59.954 19:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:06:59.954 19:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:59.954 19:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:59.954 19:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:59.954 19:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:59.954 19:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:59.954 19:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:59.954 19:47:50 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:06:59.954 19:47:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:59.954 19:47:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:59.954 19:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:59.954 "name": "Existed_Raid", 00:06:59.954 "uuid": "1ff37644-f706-4811-9b82-c9dbe8e888ba", 00:06:59.954 "strip_size_kb": 64, 00:06:59.954 "state": "online", 00:06:59.954 "raid_level": "concat", 00:06:59.954 "superblock": false, 00:06:59.954 "num_base_bdevs": 3, 00:06:59.954 "num_base_bdevs_discovered": 3, 00:06:59.954 "num_base_bdevs_operational": 3, 00:06:59.954 "base_bdevs_list": [ 00:06:59.954 { 00:06:59.954 "name": "NewBaseBdev", 00:06:59.954 "uuid": "73114c6d-e3bd-4679-98d0-c4a30dbb4b48", 00:06:59.954 "is_configured": true, 00:06:59.954 "data_offset": 0, 00:06:59.954 "data_size": 65536 00:06:59.954 }, 00:06:59.954 { 00:06:59.954 "name": "BaseBdev2", 00:06:59.954 "uuid": "ff252eeb-1709-446e-a1b3-bcc053ebda6a", 00:06:59.954 "is_configured": true, 00:06:59.954 "data_offset": 0, 00:06:59.954 "data_size": 65536 00:06:59.954 }, 00:06:59.954 { 00:06:59.954 "name": "BaseBdev3", 00:06:59.954 "uuid": "1c33cfa0-a712-4876-a687-db611eaa6074", 00:06:59.954 "is_configured": true, 00:06:59.954 "data_offset": 0, 00:06:59.954 "data_size": 65536 00:06:59.954 } 00:06:59.954 ] 00:06:59.954 }' 00:06:59.954 19:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:59.954 19:47:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:00.212 19:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:07:00.212 19:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:00.212 19:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 
-- # local raid_bdev_info 00:07:00.212 19:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:00.212 19:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:00.212 19:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:00.212 19:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:00.212 19:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:00.212 19:47:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:00.212 19:47:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:00.212 [2024-11-26 19:47:50.988224] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:00.212 19:47:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:00.212 19:47:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:00.212 "name": "Existed_Raid", 00:07:00.212 "aliases": [ 00:07:00.212 "1ff37644-f706-4811-9b82-c9dbe8e888ba" 00:07:00.212 ], 00:07:00.212 "product_name": "Raid Volume", 00:07:00.212 "block_size": 512, 00:07:00.212 "num_blocks": 196608, 00:07:00.212 "uuid": "1ff37644-f706-4811-9b82-c9dbe8e888ba", 00:07:00.212 "assigned_rate_limits": { 00:07:00.212 "rw_ios_per_sec": 0, 00:07:00.212 "rw_mbytes_per_sec": 0, 00:07:00.212 "r_mbytes_per_sec": 0, 00:07:00.212 "w_mbytes_per_sec": 0 00:07:00.212 }, 00:07:00.212 "claimed": false, 00:07:00.212 "zoned": false, 00:07:00.212 "supported_io_types": { 00:07:00.212 "read": true, 00:07:00.212 "write": true, 00:07:00.212 "unmap": true, 00:07:00.212 "flush": true, 00:07:00.212 "reset": true, 00:07:00.212 "nvme_admin": false, 00:07:00.212 "nvme_io": false, 00:07:00.212 "nvme_io_md": false, 00:07:00.212 "write_zeroes": true, 
00:07:00.212 "zcopy": false, 00:07:00.212 "get_zone_info": false, 00:07:00.212 "zone_management": false, 00:07:00.212 "zone_append": false, 00:07:00.212 "compare": false, 00:07:00.212 "compare_and_write": false, 00:07:00.212 "abort": false, 00:07:00.212 "seek_hole": false, 00:07:00.212 "seek_data": false, 00:07:00.212 "copy": false, 00:07:00.212 "nvme_iov_md": false 00:07:00.212 }, 00:07:00.212 "memory_domains": [ 00:07:00.212 { 00:07:00.212 "dma_device_id": "system", 00:07:00.212 "dma_device_type": 1 00:07:00.212 }, 00:07:00.212 { 00:07:00.212 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:00.212 "dma_device_type": 2 00:07:00.212 }, 00:07:00.212 { 00:07:00.212 "dma_device_id": "system", 00:07:00.212 "dma_device_type": 1 00:07:00.212 }, 00:07:00.212 { 00:07:00.212 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:00.212 "dma_device_type": 2 00:07:00.212 }, 00:07:00.212 { 00:07:00.212 "dma_device_id": "system", 00:07:00.212 "dma_device_type": 1 00:07:00.212 }, 00:07:00.212 { 00:07:00.212 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:00.212 "dma_device_type": 2 00:07:00.212 } 00:07:00.212 ], 00:07:00.212 "driver_specific": { 00:07:00.212 "raid": { 00:07:00.212 "uuid": "1ff37644-f706-4811-9b82-c9dbe8e888ba", 00:07:00.212 "strip_size_kb": 64, 00:07:00.212 "state": "online", 00:07:00.213 "raid_level": "concat", 00:07:00.213 "superblock": false, 00:07:00.213 "num_base_bdevs": 3, 00:07:00.213 "num_base_bdevs_discovered": 3, 00:07:00.213 "num_base_bdevs_operational": 3, 00:07:00.213 "base_bdevs_list": [ 00:07:00.213 { 00:07:00.213 "name": "NewBaseBdev", 00:07:00.213 "uuid": "73114c6d-e3bd-4679-98d0-c4a30dbb4b48", 00:07:00.213 "is_configured": true, 00:07:00.213 "data_offset": 0, 00:07:00.213 "data_size": 65536 00:07:00.213 }, 00:07:00.213 { 00:07:00.213 "name": "BaseBdev2", 00:07:00.213 "uuid": "ff252eeb-1709-446e-a1b3-bcc053ebda6a", 00:07:00.213 "is_configured": true, 00:07:00.213 "data_offset": 0, 00:07:00.213 "data_size": 65536 00:07:00.213 }, 00:07:00.213 { 
00:07:00.213 "name": "BaseBdev3", 00:07:00.213 "uuid": "1c33cfa0-a712-4876-a687-db611eaa6074", 00:07:00.213 "is_configured": true, 00:07:00.213 "data_offset": 0, 00:07:00.213 "data_size": 65536 00:07:00.213 } 00:07:00.213 ] 00:07:00.213 } 00:07:00.213 } 00:07:00.213 }' 00:07:00.213 19:47:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:00.213 19:47:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:07:00.213 BaseBdev2 00:07:00.213 BaseBdev3' 00:07:00.213 19:47:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:00.213 19:47:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:00.213 19:47:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:00.213 19:47:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:07:00.213 19:47:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:00.213 19:47:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:00.213 19:47:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:00.213 19:47:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:00.213 19:47:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:00.213 19:47:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:00.213 19:47:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:00.213 19:47:51 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:00.213 19:47:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:00.213 19:47:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:00.213 19:47:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:00.213 19:47:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:00.471 19:47:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:00.471 19:47:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:00.471 19:47:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:00.471 19:47:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:07:00.471 19:47:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:00.471 19:47:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:00.471 19:47:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:00.471 19:47:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:00.471 19:47:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:00.471 19:47:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:00.471 19:47:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:00.471 19:47:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:00.471 19:47:51 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:07:00.471 [2024-11-26 19:47:51.191924] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:00.471 [2024-11-26 19:47:51.191960] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:00.471 [2024-11-26 19:47:51.192057] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:00.471 [2024-11-26 19:47:51.192128] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:00.471 [2024-11-26 19:47:51.192152] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:07:00.471 19:47:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:00.471 19:47:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 64097 00:07:00.471 19:47:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 64097 ']' 00:07:00.471 19:47:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 64097 00:07:00.471 19:47:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:07:00.471 19:47:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:00.471 19:47:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64097 00:07:00.471 19:47:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:00.471 19:47:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:00.471 killing process with pid 64097 00:07:00.471 19:47:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64097' 00:07:00.471 19:47:51 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@973 -- # kill 64097 00:07:00.471 [2024-11-26 19:47:51.226534] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:00.471 19:47:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 64097 00:07:00.729 [2024-11-26 19:47:51.427570] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:01.295 19:47:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:07:01.295 00:07:01.295 real 0m7.898s 00:07:01.295 user 0m12.504s 00:07:01.295 sys 0m1.297s 00:07:01.295 19:47:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:01.295 ************************************ 00:07:01.295 END TEST raid_state_function_test 00:07:01.295 ************************************ 00:07:01.295 19:47:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:01.552 19:47:52 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:07:01.552 19:47:52 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:01.552 19:47:52 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:01.552 19:47:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:01.552 ************************************ 00:07:01.552 START TEST raid_state_function_test_sb 00:07:01.552 ************************************ 00:07:01.552 19:47:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 3 true 00:07:01.552 19:47:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:07:01.552 19:47:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:07:01.552 19:47:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:07:01.552 19:47:52 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:01.552 19:47:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:01.552 19:47:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:01.552 19:47:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:01.552 19:47:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:01.552 19:47:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:01.552 19:47:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:01.552 19:47:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:01.552 19:47:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:01.552 19:47:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:07:01.552 19:47:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:01.552 19:47:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:01.552 19:47:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:07:01.552 19:47:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:01.552 19:47:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:01.552 19:47:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:01.552 19:47:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:01.552 19:47:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:01.552 19:47:52 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:07:01.552 19:47:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:01.552 19:47:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:01.552 19:47:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:07:01.552 19:47:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:07:01.553 19:47:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=64691 00:07:01.553 19:47:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 64691' 00:07:01.553 Process raid pid: 64691 00:07:01.553 19:47:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:01.553 19:47:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 64691 00:07:01.553 19:47:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 64691 ']' 00:07:01.553 19:47:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:01.553 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:01.553 19:47:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:01.553 19:47:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:01.553 19:47:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:01.553 19:47:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:01.553 [2024-11-26 19:47:52.343640] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 00:07:01.553 [2024-11-26 19:47:52.343812] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:01.809 [2024-11-26 19:47:52.518601] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.809 [2024-11-26 19:47:52.615890] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.065 [2024-11-26 19:47:52.753262] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:02.065 [2024-11-26 19:47:52.753304] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:02.321 19:47:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:02.321 19:47:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:07:02.321 19:47:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:07:02.321 19:47:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:02.321 19:47:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:02.321 [2024-11-26 19:47:53.255888] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:02.321 [2024-11-26 19:47:53.255937] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:02.321 [2024-11-26 
19:47:53.255947] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:02.321 [2024-11-26 19:47:53.255958] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:02.321 [2024-11-26 19:47:53.255964] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:07:02.321 [2024-11-26 19:47:53.255973] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:07:02.578 19:47:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:02.578 19:47:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:07:02.578 19:47:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:02.578 19:47:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:02.578 19:47:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:02.578 19:47:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:02.578 19:47:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:02.578 19:47:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:02.578 19:47:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:02.578 19:47:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:02.578 19:47:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:02.578 19:47:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:02.578 19:47:53 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:07:02.578 19:47:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:02.578 19:47:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:02.578 19:47:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:02.578 19:47:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:02.578 "name": "Existed_Raid", 00:07:02.578 "uuid": "2d0c6393-f1f9-49da-a05c-e5348ff013ba", 00:07:02.578 "strip_size_kb": 64, 00:07:02.578 "state": "configuring", 00:07:02.578 "raid_level": "concat", 00:07:02.578 "superblock": true, 00:07:02.578 "num_base_bdevs": 3, 00:07:02.578 "num_base_bdevs_discovered": 0, 00:07:02.578 "num_base_bdevs_operational": 3, 00:07:02.578 "base_bdevs_list": [ 00:07:02.578 { 00:07:02.578 "name": "BaseBdev1", 00:07:02.578 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:02.578 "is_configured": false, 00:07:02.578 "data_offset": 0, 00:07:02.578 "data_size": 0 00:07:02.578 }, 00:07:02.578 { 00:07:02.578 "name": "BaseBdev2", 00:07:02.578 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:02.578 "is_configured": false, 00:07:02.578 "data_offset": 0, 00:07:02.578 "data_size": 0 00:07:02.578 }, 00:07:02.578 { 00:07:02.578 "name": "BaseBdev3", 00:07:02.578 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:02.578 "is_configured": false, 00:07:02.578 "data_offset": 0, 00:07:02.578 "data_size": 0 00:07:02.578 } 00:07:02.578 ] 00:07:02.578 }' 00:07:02.578 19:47:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:02.578 19:47:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:02.835 19:47:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:02.835 19:47:53 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:07:02.835 19:47:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:02.835 [2024-11-26 19:47:53.555894] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:02.835 [2024-11-26 19:47:53.555930] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:07:02.835 19:47:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:02.835 19:47:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:07:02.835 19:47:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:02.835 19:47:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:02.835 [2024-11-26 19:47:53.563909] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:02.835 [2024-11-26 19:47:53.563950] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:02.835 [2024-11-26 19:47:53.563958] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:02.835 [2024-11-26 19:47:53.563967] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:02.835 [2024-11-26 19:47:53.563973] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:07:02.835 [2024-11-26 19:47:53.563982] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:07:02.835 19:47:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:02.835 19:47:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:02.835 
19:47:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:02.835 19:47:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:02.835 [2024-11-26 19:47:53.596204] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:02.835 BaseBdev1 00:07:02.835 19:47:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:02.835 19:47:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:02.835 19:47:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:07:02.835 19:47:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:02.835 19:47:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:02.835 19:47:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:02.835 19:47:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:02.835 19:47:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:02.835 19:47:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:02.835 19:47:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:02.835 19:47:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:02.835 19:47:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:02.835 19:47:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:02.835 19:47:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:02.835 [ 00:07:02.835 { 
00:07:02.835 "name": "BaseBdev1", 00:07:02.835 "aliases": [ 00:07:02.835 "f4259ab2-0b0d-4b3e-9ebe-e52b02b1a48f" 00:07:02.835 ], 00:07:02.835 "product_name": "Malloc disk", 00:07:02.835 "block_size": 512, 00:07:02.835 "num_blocks": 65536, 00:07:02.835 "uuid": "f4259ab2-0b0d-4b3e-9ebe-e52b02b1a48f", 00:07:02.835 "assigned_rate_limits": { 00:07:02.835 "rw_ios_per_sec": 0, 00:07:02.835 "rw_mbytes_per_sec": 0, 00:07:02.835 "r_mbytes_per_sec": 0, 00:07:02.835 "w_mbytes_per_sec": 0 00:07:02.835 }, 00:07:02.835 "claimed": true, 00:07:02.835 "claim_type": "exclusive_write", 00:07:02.835 "zoned": false, 00:07:02.835 "supported_io_types": { 00:07:02.835 "read": true, 00:07:02.835 "write": true, 00:07:02.835 "unmap": true, 00:07:02.835 "flush": true, 00:07:02.835 "reset": true, 00:07:02.835 "nvme_admin": false, 00:07:02.835 "nvme_io": false, 00:07:02.835 "nvme_io_md": false, 00:07:02.835 "write_zeroes": true, 00:07:02.835 "zcopy": true, 00:07:02.835 "get_zone_info": false, 00:07:02.835 "zone_management": false, 00:07:02.835 "zone_append": false, 00:07:02.835 "compare": false, 00:07:02.835 "compare_and_write": false, 00:07:02.835 "abort": true, 00:07:02.835 "seek_hole": false, 00:07:02.835 "seek_data": false, 00:07:02.835 "copy": true, 00:07:02.835 "nvme_iov_md": false 00:07:02.835 }, 00:07:02.835 "memory_domains": [ 00:07:02.835 { 00:07:02.835 "dma_device_id": "system", 00:07:02.835 "dma_device_type": 1 00:07:02.835 }, 00:07:02.835 { 00:07:02.835 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:02.835 "dma_device_type": 2 00:07:02.835 } 00:07:02.835 ], 00:07:02.835 "driver_specific": {} 00:07:02.835 } 00:07:02.835 ] 00:07:02.835 19:47:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:02.835 19:47:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:02.835 19:47:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 
00:07:02.835 19:47:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:02.835 19:47:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:02.835 19:47:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:02.835 19:47:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:02.835 19:47:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:02.835 19:47:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:02.835 19:47:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:02.835 19:47:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:02.835 19:47:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:02.835 19:47:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:02.835 19:47:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:02.835 19:47:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:02.835 19:47:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:02.835 19:47:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:02.835 19:47:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:02.835 "name": "Existed_Raid", 00:07:02.835 "uuid": "aee1a8f9-f894-4036-81d2-238f401193e6", 00:07:02.835 "strip_size_kb": 64, 00:07:02.836 "state": "configuring", 00:07:02.836 "raid_level": "concat", 00:07:02.836 "superblock": true, 00:07:02.836 
"num_base_bdevs": 3, 00:07:02.836 "num_base_bdevs_discovered": 1, 00:07:02.836 "num_base_bdevs_operational": 3, 00:07:02.836 "base_bdevs_list": [ 00:07:02.836 { 00:07:02.836 "name": "BaseBdev1", 00:07:02.836 "uuid": "f4259ab2-0b0d-4b3e-9ebe-e52b02b1a48f", 00:07:02.836 "is_configured": true, 00:07:02.836 "data_offset": 2048, 00:07:02.836 "data_size": 63488 00:07:02.836 }, 00:07:02.836 { 00:07:02.836 "name": "BaseBdev2", 00:07:02.836 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:02.836 "is_configured": false, 00:07:02.836 "data_offset": 0, 00:07:02.836 "data_size": 0 00:07:02.836 }, 00:07:02.836 { 00:07:02.836 "name": "BaseBdev3", 00:07:02.836 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:02.836 "is_configured": false, 00:07:02.836 "data_offset": 0, 00:07:02.836 "data_size": 0 00:07:02.836 } 00:07:02.836 ] 00:07:02.836 }' 00:07:02.836 19:47:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:02.836 19:47:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:03.096 19:47:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:03.096 19:47:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.096 19:47:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:03.096 [2024-11-26 19:47:53.952362] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:03.096 [2024-11-26 19:47:53.952410] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:07:03.096 19:47:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.096 19:47:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:07:03.096 
19:47:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.096 19:47:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:03.096 [2024-11-26 19:47:53.960410] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:03.096 [2024-11-26 19:47:53.962286] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:03.097 [2024-11-26 19:47:53.962329] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:03.097 [2024-11-26 19:47:53.962350] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:07:03.097 [2024-11-26 19:47:53.962360] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:07:03.097 19:47:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.097 19:47:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:03.097 19:47:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:03.097 19:47:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:07:03.097 19:47:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:03.097 19:47:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:03.097 19:47:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:03.097 19:47:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:03.097 19:47:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:03.097 19:47:53 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:03.097 19:47:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:03.097 19:47:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:03.097 19:47:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:03.097 19:47:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:03.097 19:47:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:03.097 19:47:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.097 19:47:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:03.097 19:47:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.097 19:47:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:03.097 "name": "Existed_Raid", 00:07:03.097 "uuid": "d87a7301-a47b-4bff-8d3c-db387be119b5", 00:07:03.097 "strip_size_kb": 64, 00:07:03.097 "state": "configuring", 00:07:03.097 "raid_level": "concat", 00:07:03.097 "superblock": true, 00:07:03.097 "num_base_bdevs": 3, 00:07:03.097 "num_base_bdevs_discovered": 1, 00:07:03.097 "num_base_bdevs_operational": 3, 00:07:03.097 "base_bdevs_list": [ 00:07:03.097 { 00:07:03.097 "name": "BaseBdev1", 00:07:03.097 "uuid": "f4259ab2-0b0d-4b3e-9ebe-e52b02b1a48f", 00:07:03.097 "is_configured": true, 00:07:03.097 "data_offset": 2048, 00:07:03.097 "data_size": 63488 00:07:03.097 }, 00:07:03.097 { 00:07:03.097 "name": "BaseBdev2", 00:07:03.097 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:03.097 "is_configured": false, 00:07:03.097 "data_offset": 0, 00:07:03.097 "data_size": 0 00:07:03.097 }, 00:07:03.097 { 00:07:03.097 "name": "BaseBdev3", 00:07:03.097 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:07:03.097 "is_configured": false, 00:07:03.097 "data_offset": 0, 00:07:03.097 "data_size": 0 00:07:03.097 } 00:07:03.097 ] 00:07:03.097 }' 00:07:03.097 19:47:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:03.097 19:47:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:03.663 19:47:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:03.663 19:47:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.663 19:47:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:03.663 [2024-11-26 19:47:54.322978] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:03.663 BaseBdev2 00:07:03.663 19:47:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.663 19:47:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:03.663 19:47:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:07:03.663 19:47:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:03.663 19:47:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:03.663 19:47:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:03.663 19:47:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:03.663 19:47:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:03.663 19:47:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.663 19:47:54 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:07:03.663 19:47:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.663 19:47:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:03.663 19:47:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.663 19:47:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:03.663 [ 00:07:03.663 { 00:07:03.663 "name": "BaseBdev2", 00:07:03.663 "aliases": [ 00:07:03.663 "e894cee2-3380-494b-b5d0-d773a4cf4a28" 00:07:03.663 ], 00:07:03.663 "product_name": "Malloc disk", 00:07:03.663 "block_size": 512, 00:07:03.663 "num_blocks": 65536, 00:07:03.663 "uuid": "e894cee2-3380-494b-b5d0-d773a4cf4a28", 00:07:03.663 "assigned_rate_limits": { 00:07:03.663 "rw_ios_per_sec": 0, 00:07:03.663 "rw_mbytes_per_sec": 0, 00:07:03.663 "r_mbytes_per_sec": 0, 00:07:03.663 "w_mbytes_per_sec": 0 00:07:03.663 }, 00:07:03.663 "claimed": true, 00:07:03.663 "claim_type": "exclusive_write", 00:07:03.663 "zoned": false, 00:07:03.663 "supported_io_types": { 00:07:03.663 "read": true, 00:07:03.663 "write": true, 00:07:03.663 "unmap": true, 00:07:03.663 "flush": true, 00:07:03.663 "reset": true, 00:07:03.663 "nvme_admin": false, 00:07:03.663 "nvme_io": false, 00:07:03.663 "nvme_io_md": false, 00:07:03.663 "write_zeroes": true, 00:07:03.663 "zcopy": true, 00:07:03.663 "get_zone_info": false, 00:07:03.663 "zone_management": false, 00:07:03.663 "zone_append": false, 00:07:03.663 "compare": false, 00:07:03.663 "compare_and_write": false, 00:07:03.663 "abort": true, 00:07:03.663 "seek_hole": false, 00:07:03.663 "seek_data": false, 00:07:03.663 "copy": true, 00:07:03.663 "nvme_iov_md": false 00:07:03.663 }, 00:07:03.663 "memory_domains": [ 00:07:03.663 { 00:07:03.663 "dma_device_id": "system", 00:07:03.663 "dma_device_type": 1 00:07:03.663 }, 00:07:03.663 { 00:07:03.663 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:03.663 "dma_device_type": 2 00:07:03.663 } 00:07:03.663 ], 00:07:03.663 "driver_specific": {} 00:07:03.663 } 00:07:03.663 ] 00:07:03.664 19:47:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.664 19:47:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:03.664 19:47:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:03.664 19:47:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:03.664 19:47:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:07:03.664 19:47:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:03.664 19:47:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:03.664 19:47:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:03.664 19:47:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:03.664 19:47:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:03.664 19:47:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:03.664 19:47:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:03.664 19:47:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:03.664 19:47:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:03.664 19:47:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:03.664 19:47:54 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:03.664 19:47:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.664 19:47:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:03.664 19:47:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.664 19:47:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:03.664 "name": "Existed_Raid", 00:07:03.664 "uuid": "d87a7301-a47b-4bff-8d3c-db387be119b5", 00:07:03.664 "strip_size_kb": 64, 00:07:03.664 "state": "configuring", 00:07:03.664 "raid_level": "concat", 00:07:03.664 "superblock": true, 00:07:03.664 "num_base_bdevs": 3, 00:07:03.664 "num_base_bdevs_discovered": 2, 00:07:03.664 "num_base_bdevs_operational": 3, 00:07:03.664 "base_bdevs_list": [ 00:07:03.664 { 00:07:03.664 "name": "BaseBdev1", 00:07:03.664 "uuid": "f4259ab2-0b0d-4b3e-9ebe-e52b02b1a48f", 00:07:03.664 "is_configured": true, 00:07:03.664 "data_offset": 2048, 00:07:03.664 "data_size": 63488 00:07:03.664 }, 00:07:03.664 { 00:07:03.664 "name": "BaseBdev2", 00:07:03.664 "uuid": "e894cee2-3380-494b-b5d0-d773a4cf4a28", 00:07:03.664 "is_configured": true, 00:07:03.664 "data_offset": 2048, 00:07:03.664 "data_size": 63488 00:07:03.664 }, 00:07:03.664 { 00:07:03.664 "name": "BaseBdev3", 00:07:03.664 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:03.664 "is_configured": false, 00:07:03.664 "data_offset": 0, 00:07:03.664 "data_size": 0 00:07:03.664 } 00:07:03.664 ] 00:07:03.664 }' 00:07:03.664 19:47:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:03.664 19:47:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:03.923 19:47:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:07:03.923 19:47:54 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.923 19:47:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:03.923 [2024-11-26 19:47:54.769750] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:07:03.923 [2024-11-26 19:47:54.769991] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:03.923 BaseBdev3 00:07:03.923 [2024-11-26 19:47:54.770009] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:07:03.923 [2024-11-26 19:47:54.770284] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:07:03.923 [2024-11-26 19:47:54.770454] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:03.923 [2024-11-26 19:47:54.770465] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:07:03.923 [2024-11-26 19:47:54.770596] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:03.923 19:47:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.923 19:47:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:07:03.923 19:47:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:07:03.923 19:47:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:03.923 19:47:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:03.923 19:47:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:03.923 19:47:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:03.923 19:47:54 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:03.923 19:47:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.923 19:47:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:03.923 19:47:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.923 19:47:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:07:03.923 19:47:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.923 19:47:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:03.923 [ 00:07:03.923 { 00:07:03.923 "name": "BaseBdev3", 00:07:03.923 "aliases": [ 00:07:03.923 "f7e25d18-7335-493b-9708-2a8f3999e92c" 00:07:03.923 ], 00:07:03.923 "product_name": "Malloc disk", 00:07:03.923 "block_size": 512, 00:07:03.923 "num_blocks": 65536, 00:07:03.923 "uuid": "f7e25d18-7335-493b-9708-2a8f3999e92c", 00:07:03.923 "assigned_rate_limits": { 00:07:03.923 "rw_ios_per_sec": 0, 00:07:03.923 "rw_mbytes_per_sec": 0, 00:07:03.923 "r_mbytes_per_sec": 0, 00:07:03.923 "w_mbytes_per_sec": 0 00:07:03.923 }, 00:07:03.923 "claimed": true, 00:07:03.923 "claim_type": "exclusive_write", 00:07:03.923 "zoned": false, 00:07:03.923 "supported_io_types": { 00:07:03.923 "read": true, 00:07:03.923 "write": true, 00:07:03.923 "unmap": true, 00:07:03.923 "flush": true, 00:07:03.923 "reset": true, 00:07:03.923 "nvme_admin": false, 00:07:03.923 "nvme_io": false, 00:07:03.923 "nvme_io_md": false, 00:07:03.923 "write_zeroes": true, 00:07:03.923 "zcopy": true, 00:07:03.923 "get_zone_info": false, 00:07:03.923 "zone_management": false, 00:07:03.923 "zone_append": false, 00:07:03.923 "compare": false, 00:07:03.923 "compare_and_write": false, 00:07:03.923 "abort": true, 00:07:03.923 "seek_hole": false, 00:07:03.923 "seek_data": false, 
00:07:03.923 "copy": true, 00:07:03.923 "nvme_iov_md": false 00:07:03.923 }, 00:07:03.923 "memory_domains": [ 00:07:03.923 { 00:07:03.923 "dma_device_id": "system", 00:07:03.923 "dma_device_type": 1 00:07:03.923 }, 00:07:03.923 { 00:07:03.923 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:03.923 "dma_device_type": 2 00:07:03.923 } 00:07:03.923 ], 00:07:03.923 "driver_specific": {} 00:07:03.923 } 00:07:03.923 ] 00:07:03.924 19:47:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.924 19:47:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:03.924 19:47:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:03.924 19:47:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:03.924 19:47:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:07:03.924 19:47:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:03.924 19:47:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:03.924 19:47:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:03.924 19:47:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:03.924 19:47:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:03.924 19:47:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:03.924 19:47:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:03.924 19:47:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:03.924 19:47:54 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:07:03.924 19:47:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:03.924 19:47:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.924 19:47:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:03.924 19:47:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:03.924 19:47:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.924 19:47:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:03.924 "name": "Existed_Raid", 00:07:03.924 "uuid": "d87a7301-a47b-4bff-8d3c-db387be119b5", 00:07:03.924 "strip_size_kb": 64, 00:07:03.924 "state": "online", 00:07:03.924 "raid_level": "concat", 00:07:03.924 "superblock": true, 00:07:03.924 "num_base_bdevs": 3, 00:07:03.924 "num_base_bdevs_discovered": 3, 00:07:03.924 "num_base_bdevs_operational": 3, 00:07:03.924 "base_bdevs_list": [ 00:07:03.924 { 00:07:03.924 "name": "BaseBdev1", 00:07:03.924 "uuid": "f4259ab2-0b0d-4b3e-9ebe-e52b02b1a48f", 00:07:03.924 "is_configured": true, 00:07:03.924 "data_offset": 2048, 00:07:03.924 "data_size": 63488 00:07:03.924 }, 00:07:03.924 { 00:07:03.924 "name": "BaseBdev2", 00:07:03.924 "uuid": "e894cee2-3380-494b-b5d0-d773a4cf4a28", 00:07:03.924 "is_configured": true, 00:07:03.924 "data_offset": 2048, 00:07:03.924 "data_size": 63488 00:07:03.924 }, 00:07:03.924 { 00:07:03.924 "name": "BaseBdev3", 00:07:03.924 "uuid": "f7e25d18-7335-493b-9708-2a8f3999e92c", 00:07:03.924 "is_configured": true, 00:07:03.924 "data_offset": 2048, 00:07:03.924 "data_size": 63488 00:07:03.924 } 00:07:03.924 ] 00:07:03.924 }' 00:07:03.924 19:47:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:03.924 19:47:54 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:04.488 19:47:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:04.488 19:47:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:04.488 19:47:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:04.488 19:47:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:04.488 19:47:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:07:04.488 19:47:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:04.488 19:47:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:04.488 19:47:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:04.488 19:47:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:04.488 19:47:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:04.488 [2024-11-26 19:47:55.130205] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:04.488 19:47:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:04.488 19:47:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:04.488 "name": "Existed_Raid", 00:07:04.488 "aliases": [ 00:07:04.488 "d87a7301-a47b-4bff-8d3c-db387be119b5" 00:07:04.488 ], 00:07:04.488 "product_name": "Raid Volume", 00:07:04.488 "block_size": 512, 00:07:04.488 "num_blocks": 190464, 00:07:04.488 "uuid": "d87a7301-a47b-4bff-8d3c-db387be119b5", 00:07:04.488 "assigned_rate_limits": { 00:07:04.488 "rw_ios_per_sec": 0, 00:07:04.488 "rw_mbytes_per_sec": 0, 00:07:04.488 
"r_mbytes_per_sec": 0, 00:07:04.488 "w_mbytes_per_sec": 0 00:07:04.488 }, 00:07:04.488 "claimed": false, 00:07:04.488 "zoned": false, 00:07:04.488 "supported_io_types": { 00:07:04.488 "read": true, 00:07:04.488 "write": true, 00:07:04.488 "unmap": true, 00:07:04.488 "flush": true, 00:07:04.488 "reset": true, 00:07:04.488 "nvme_admin": false, 00:07:04.488 "nvme_io": false, 00:07:04.488 "nvme_io_md": false, 00:07:04.488 "write_zeroes": true, 00:07:04.488 "zcopy": false, 00:07:04.488 "get_zone_info": false, 00:07:04.488 "zone_management": false, 00:07:04.488 "zone_append": false, 00:07:04.488 "compare": false, 00:07:04.488 "compare_and_write": false, 00:07:04.488 "abort": false, 00:07:04.488 "seek_hole": false, 00:07:04.488 "seek_data": false, 00:07:04.488 "copy": false, 00:07:04.488 "nvme_iov_md": false 00:07:04.488 }, 00:07:04.488 "memory_domains": [ 00:07:04.488 { 00:07:04.488 "dma_device_id": "system", 00:07:04.488 "dma_device_type": 1 00:07:04.488 }, 00:07:04.488 { 00:07:04.488 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:04.488 "dma_device_type": 2 00:07:04.488 }, 00:07:04.488 { 00:07:04.488 "dma_device_id": "system", 00:07:04.488 "dma_device_type": 1 00:07:04.488 }, 00:07:04.488 { 00:07:04.488 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:04.488 "dma_device_type": 2 00:07:04.488 }, 00:07:04.488 { 00:07:04.488 "dma_device_id": "system", 00:07:04.488 "dma_device_type": 1 00:07:04.488 }, 00:07:04.488 { 00:07:04.488 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:04.489 "dma_device_type": 2 00:07:04.489 } 00:07:04.489 ], 00:07:04.489 "driver_specific": { 00:07:04.489 "raid": { 00:07:04.489 "uuid": "d87a7301-a47b-4bff-8d3c-db387be119b5", 00:07:04.489 "strip_size_kb": 64, 00:07:04.489 "state": "online", 00:07:04.489 "raid_level": "concat", 00:07:04.489 "superblock": true, 00:07:04.489 "num_base_bdevs": 3, 00:07:04.489 "num_base_bdevs_discovered": 3, 00:07:04.489 "num_base_bdevs_operational": 3, 00:07:04.489 "base_bdevs_list": [ 00:07:04.489 { 00:07:04.489 
"name": "BaseBdev1", 00:07:04.489 "uuid": "f4259ab2-0b0d-4b3e-9ebe-e52b02b1a48f", 00:07:04.489 "is_configured": true, 00:07:04.489 "data_offset": 2048, 00:07:04.489 "data_size": 63488 00:07:04.489 }, 00:07:04.489 { 00:07:04.489 "name": "BaseBdev2", 00:07:04.489 "uuid": "e894cee2-3380-494b-b5d0-d773a4cf4a28", 00:07:04.489 "is_configured": true, 00:07:04.489 "data_offset": 2048, 00:07:04.489 "data_size": 63488 00:07:04.489 }, 00:07:04.489 { 00:07:04.489 "name": "BaseBdev3", 00:07:04.489 "uuid": "f7e25d18-7335-493b-9708-2a8f3999e92c", 00:07:04.489 "is_configured": true, 00:07:04.489 "data_offset": 2048, 00:07:04.489 "data_size": 63488 00:07:04.489 } 00:07:04.489 ] 00:07:04.489 } 00:07:04.489 } 00:07:04.489 }' 00:07:04.489 19:47:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:04.489 19:47:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:04.489 BaseBdev2 00:07:04.489 BaseBdev3' 00:07:04.489 19:47:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:04.489 19:47:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:04.489 19:47:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:04.489 19:47:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:04.489 19:47:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:04.489 19:47:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:04.489 19:47:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:04.489 19:47:55 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:04.489 19:47:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:04.489 19:47:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:04.489 19:47:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:04.489 19:47:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:04.489 19:47:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:04.489 19:47:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:04.489 19:47:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:04.489 19:47:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:04.489 19:47:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:04.489 19:47:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:04.489 19:47:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:04.489 19:47:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:04.489 19:47:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:07:04.489 19:47:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:04.489 19:47:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:04.489 19:47:55 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:04.489 19:47:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:04.489 19:47:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:04.489 19:47:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:04.489 19:47:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:04.489 19:47:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:04.489 [2024-11-26 19:47:55.341949] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:04.489 [2024-11-26 19:47:55.342079] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:04.489 [2024-11-26 19:47:55.342145] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:04.489 19:47:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:04.489 19:47:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:04.489 19:47:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:07:04.489 19:47:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:04.489 19:47:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:07:04.489 19:47:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:07:04.489 19:47:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:07:04.489 19:47:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:04.489 19:47:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=offline 00:07:04.489 19:47:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:04.489 19:47:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:04.489 19:47:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:04.489 19:47:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:04.489 19:47:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:04.489 19:47:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:04.489 19:47:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:04.489 19:47:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:04.489 19:47:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:04.489 19:47:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:04.489 19:47:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:04.489 19:47:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:04.747 19:47:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:04.747 "name": "Existed_Raid", 00:07:04.747 "uuid": "d87a7301-a47b-4bff-8d3c-db387be119b5", 00:07:04.747 "strip_size_kb": 64, 00:07:04.747 "state": "offline", 00:07:04.747 "raid_level": "concat", 00:07:04.747 "superblock": true, 00:07:04.747 "num_base_bdevs": 3, 00:07:04.747 "num_base_bdevs_discovered": 2, 00:07:04.747 "num_base_bdevs_operational": 2, 00:07:04.747 "base_bdevs_list": [ 00:07:04.747 { 00:07:04.747 "name": null, 00:07:04.747 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:07:04.747 "is_configured": false, 00:07:04.747 "data_offset": 0, 00:07:04.747 "data_size": 63488 00:07:04.747 }, 00:07:04.747 { 00:07:04.747 "name": "BaseBdev2", 00:07:04.747 "uuid": "e894cee2-3380-494b-b5d0-d773a4cf4a28", 00:07:04.747 "is_configured": true, 00:07:04.747 "data_offset": 2048, 00:07:04.747 "data_size": 63488 00:07:04.747 }, 00:07:04.747 { 00:07:04.747 "name": "BaseBdev3", 00:07:04.747 "uuid": "f7e25d18-7335-493b-9708-2a8f3999e92c", 00:07:04.747 "is_configured": true, 00:07:04.747 "data_offset": 2048, 00:07:04.747 "data_size": 63488 00:07:04.747 } 00:07:04.747 ] 00:07:04.747 }' 00:07:04.747 19:47:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:04.747 19:47:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:05.006 19:47:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:05.006 19:47:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:05.006 19:47:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:05.006 19:47:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:05.006 19:47:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.006 19:47:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:05.006 19:47:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.006 19:47:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:05.006 19:47:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:05.006 19:47:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 
00:07:05.006 19:47:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.006 19:47:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:05.006 [2024-11-26 19:47:55.765033] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:05.006 19:47:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.006 19:47:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:05.006 19:47:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:05.006 19:47:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:05.006 19:47:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.006 19:47:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:05.006 19:47:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:05.006 19:47:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.006 19:47:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:05.006 19:47:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:05.006 19:47:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:07:05.006 19:47:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.006 19:47:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:05.006 [2024-11-26 19:47:55.864534] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:07:05.006 [2024-11-26 19:47:55.864581] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:07:05.006 19:47:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.006 19:47:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:05.006 19:47:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:05.006 19:47:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:05.006 19:47:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:05.006 19:47:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.006 19:47:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:05.006 19:47:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.265 19:47:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:05.265 19:47:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:05.265 19:47:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:07:05.265 19:47:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:07:05.265 19:47:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:07:05.265 19:47:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:05.265 19:47:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.265 19:47:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:05.265 BaseBdev2 00:07:05.265 19:47:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.265 
19:47:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:07:05.265 19:47:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:07:05.265 19:47:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:05.265 19:47:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:05.265 19:47:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:05.265 19:47:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:05.265 19:47:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:05.265 19:47:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.265 19:47:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:05.265 19:47:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.265 19:47:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:05.265 19:47:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.265 19:47:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:05.265 [ 00:07:05.265 { 00:07:05.265 "name": "BaseBdev2", 00:07:05.265 "aliases": [ 00:07:05.265 "bbb3d650-3d55-459a-81f6-0f55bebec0c5" 00:07:05.265 ], 00:07:05.265 "product_name": "Malloc disk", 00:07:05.265 "block_size": 512, 00:07:05.265 "num_blocks": 65536, 00:07:05.265 "uuid": "bbb3d650-3d55-459a-81f6-0f55bebec0c5", 00:07:05.265 "assigned_rate_limits": { 00:07:05.265 "rw_ios_per_sec": 0, 00:07:05.265 "rw_mbytes_per_sec": 0, 00:07:05.265 "r_mbytes_per_sec": 0, 00:07:05.265 "w_mbytes_per_sec": 0 
00:07:05.265 }, 00:07:05.265 "claimed": false, 00:07:05.265 "zoned": false, 00:07:05.265 "supported_io_types": { 00:07:05.265 "read": true, 00:07:05.265 "write": true, 00:07:05.265 "unmap": true, 00:07:05.265 "flush": true, 00:07:05.265 "reset": true, 00:07:05.265 "nvme_admin": false, 00:07:05.265 "nvme_io": false, 00:07:05.265 "nvme_io_md": false, 00:07:05.265 "write_zeroes": true, 00:07:05.265 "zcopy": true, 00:07:05.265 "get_zone_info": false, 00:07:05.265 "zone_management": false, 00:07:05.265 "zone_append": false, 00:07:05.265 "compare": false, 00:07:05.265 "compare_and_write": false, 00:07:05.265 "abort": true, 00:07:05.265 "seek_hole": false, 00:07:05.265 "seek_data": false, 00:07:05.265 "copy": true, 00:07:05.265 "nvme_iov_md": false 00:07:05.265 }, 00:07:05.265 "memory_domains": [ 00:07:05.265 { 00:07:05.265 "dma_device_id": "system", 00:07:05.265 "dma_device_type": 1 00:07:05.265 }, 00:07:05.265 { 00:07:05.265 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:05.265 "dma_device_type": 2 00:07:05.265 } 00:07:05.265 ], 00:07:05.265 "driver_specific": {} 00:07:05.265 } 00:07:05.265 ] 00:07:05.265 19:47:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.265 19:47:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:05.265 19:47:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:07:05.265 19:47:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:07:05.265 19:47:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:07:05.265 19:47:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.265 19:47:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:05.265 BaseBdev3 00:07:05.265 19:47:56 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.265 19:47:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:07:05.265 19:47:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:07:05.265 19:47:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:05.265 19:47:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:05.265 19:47:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:05.265 19:47:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:05.265 19:47:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:05.265 19:47:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.265 19:47:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:05.266 19:47:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.266 19:47:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:07:05.266 19:47:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.266 19:47:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:05.266 [ 00:07:05.266 { 00:07:05.266 "name": "BaseBdev3", 00:07:05.266 "aliases": [ 00:07:05.266 "1beeb7e9-8563-4456-8758-b0362f9ce242" 00:07:05.266 ], 00:07:05.266 "product_name": "Malloc disk", 00:07:05.266 "block_size": 512, 00:07:05.266 "num_blocks": 65536, 00:07:05.266 "uuid": "1beeb7e9-8563-4456-8758-b0362f9ce242", 00:07:05.266 "assigned_rate_limits": { 00:07:05.266 "rw_ios_per_sec": 0, 00:07:05.266 "rw_mbytes_per_sec": 0, 
00:07:05.266 "r_mbytes_per_sec": 0, 00:07:05.266 "w_mbytes_per_sec": 0 00:07:05.266 }, 00:07:05.266 "claimed": false, 00:07:05.266 "zoned": false, 00:07:05.266 "supported_io_types": { 00:07:05.266 "read": true, 00:07:05.266 "write": true, 00:07:05.266 "unmap": true, 00:07:05.266 "flush": true, 00:07:05.266 "reset": true, 00:07:05.266 "nvme_admin": false, 00:07:05.266 "nvme_io": false, 00:07:05.266 "nvme_io_md": false, 00:07:05.266 "write_zeroes": true, 00:07:05.266 "zcopy": true, 00:07:05.266 "get_zone_info": false, 00:07:05.266 "zone_management": false, 00:07:05.266 "zone_append": false, 00:07:05.266 "compare": false, 00:07:05.266 "compare_and_write": false, 00:07:05.266 "abort": true, 00:07:05.266 "seek_hole": false, 00:07:05.266 "seek_data": false, 00:07:05.266 "copy": true, 00:07:05.266 "nvme_iov_md": false 00:07:05.266 }, 00:07:05.266 "memory_domains": [ 00:07:05.266 { 00:07:05.266 "dma_device_id": "system", 00:07:05.266 "dma_device_type": 1 00:07:05.266 }, 00:07:05.266 { 00:07:05.266 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:05.266 "dma_device_type": 2 00:07:05.266 } 00:07:05.266 ], 00:07:05.266 "driver_specific": {} 00:07:05.266 } 00:07:05.266 ] 00:07:05.266 19:47:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.266 19:47:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:05.266 19:47:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:07:05.266 19:47:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:07:05.266 19:47:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:07:05.266 19:47:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.266 19:47:56 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:07:05.266 [2024-11-26 19:47:56.070747] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:05.266 [2024-11-26 19:47:56.070903] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:05.266 [2024-11-26 19:47:56.070999] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:05.266 [2024-11-26 19:47:56.073262] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:07:05.266 19:47:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.266 19:47:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:07:05.266 19:47:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:05.266 19:47:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:05.266 19:47:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:05.266 19:47:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:05.266 19:47:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:05.266 19:47:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:05.266 19:47:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:05.266 19:47:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:05.266 19:47:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:05.266 19:47:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:05.266 19:47:56 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:05.266 19:47:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.266 19:47:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:05.266 19:47:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.266 19:47:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:05.266 "name": "Existed_Raid", 00:07:05.266 "uuid": "026e7e0d-afba-4bf5-a484-5e285c69478c", 00:07:05.266 "strip_size_kb": 64, 00:07:05.266 "state": "configuring", 00:07:05.266 "raid_level": "concat", 00:07:05.266 "superblock": true, 00:07:05.266 "num_base_bdevs": 3, 00:07:05.266 "num_base_bdevs_discovered": 2, 00:07:05.266 "num_base_bdevs_operational": 3, 00:07:05.266 "base_bdevs_list": [ 00:07:05.266 { 00:07:05.266 "name": "BaseBdev1", 00:07:05.266 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:05.266 "is_configured": false, 00:07:05.266 "data_offset": 0, 00:07:05.266 "data_size": 0 00:07:05.266 }, 00:07:05.266 { 00:07:05.266 "name": "BaseBdev2", 00:07:05.266 "uuid": "bbb3d650-3d55-459a-81f6-0f55bebec0c5", 00:07:05.266 "is_configured": true, 00:07:05.266 "data_offset": 2048, 00:07:05.266 "data_size": 63488 00:07:05.266 }, 00:07:05.266 { 00:07:05.266 "name": "BaseBdev3", 00:07:05.266 "uuid": "1beeb7e9-8563-4456-8758-b0362f9ce242", 00:07:05.266 "is_configured": true, 00:07:05.266 "data_offset": 2048, 00:07:05.266 "data_size": 63488 00:07:05.266 } 00:07:05.266 ] 00:07:05.266 }' 00:07:05.266 19:47:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:05.266 19:47:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:05.525 19:47:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 
00:07:05.525 19:47:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.525 19:47:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:05.525 [2024-11-26 19:47:56.410853] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:05.525 19:47:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.525 19:47:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:07:05.525 19:47:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:05.525 19:47:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:05.525 19:47:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:05.525 19:47:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:05.525 19:47:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:05.525 19:47:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:05.525 19:47:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:05.525 19:47:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:05.525 19:47:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:05.525 19:47:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:05.525 19:47:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.525 19:47:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:05.525 19:47:56 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:05.525 19:47:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.782 19:47:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:05.782 "name": "Existed_Raid", 00:07:05.782 "uuid": "026e7e0d-afba-4bf5-a484-5e285c69478c", 00:07:05.782 "strip_size_kb": 64, 00:07:05.782 "state": "configuring", 00:07:05.782 "raid_level": "concat", 00:07:05.782 "superblock": true, 00:07:05.782 "num_base_bdevs": 3, 00:07:05.782 "num_base_bdevs_discovered": 1, 00:07:05.782 "num_base_bdevs_operational": 3, 00:07:05.782 "base_bdevs_list": [ 00:07:05.782 { 00:07:05.782 "name": "BaseBdev1", 00:07:05.782 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:05.782 "is_configured": false, 00:07:05.782 "data_offset": 0, 00:07:05.782 "data_size": 0 00:07:05.782 }, 00:07:05.782 { 00:07:05.782 "name": null, 00:07:05.782 "uuid": "bbb3d650-3d55-459a-81f6-0f55bebec0c5", 00:07:05.782 "is_configured": false, 00:07:05.782 "data_offset": 0, 00:07:05.782 "data_size": 63488 00:07:05.782 }, 00:07:05.782 { 00:07:05.782 "name": "BaseBdev3", 00:07:05.782 "uuid": "1beeb7e9-8563-4456-8758-b0362f9ce242", 00:07:05.782 "is_configured": true, 00:07:05.782 "data_offset": 2048, 00:07:05.782 "data_size": 63488 00:07:05.782 } 00:07:05.782 ] 00:07:05.782 }' 00:07:05.782 19:47:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:05.782 19:47:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:06.041 19:47:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:06.041 19:47:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:07:06.041 19:47:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:07:06.041 19:47:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:06.041 19:47:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:06.041 19:47:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:07:06.041 19:47:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:06.041 19:47:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.041 19:47:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:06.041 [2024-11-26 19:47:56.801218] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:06.041 BaseBdev1 00:07:06.041 19:47:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:06.041 19:47:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:07:06.041 19:47:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:07:06.041 19:47:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:06.041 19:47:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:06.041 19:47:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:06.041 19:47:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:06.041 19:47:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:06.041 19:47:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.041 19:47:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:07:06.041 19:47:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:06.041 19:47:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:06.041 19:47:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.041 19:47:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:06.041 [ 00:07:06.041 { 00:07:06.041 "name": "BaseBdev1", 00:07:06.041 "aliases": [ 00:07:06.041 "4aac9f44-deb7-409d-a6ff-1c7839398b8e" 00:07:06.041 ], 00:07:06.041 "product_name": "Malloc disk", 00:07:06.041 "block_size": 512, 00:07:06.041 "num_blocks": 65536, 00:07:06.041 "uuid": "4aac9f44-deb7-409d-a6ff-1c7839398b8e", 00:07:06.041 "assigned_rate_limits": { 00:07:06.042 "rw_ios_per_sec": 0, 00:07:06.042 "rw_mbytes_per_sec": 0, 00:07:06.042 "r_mbytes_per_sec": 0, 00:07:06.042 "w_mbytes_per_sec": 0 00:07:06.042 }, 00:07:06.042 "claimed": true, 00:07:06.042 "claim_type": "exclusive_write", 00:07:06.042 "zoned": false, 00:07:06.042 "supported_io_types": { 00:07:06.042 "read": true, 00:07:06.042 "write": true, 00:07:06.042 "unmap": true, 00:07:06.042 "flush": true, 00:07:06.042 "reset": true, 00:07:06.042 "nvme_admin": false, 00:07:06.042 "nvme_io": false, 00:07:06.042 "nvme_io_md": false, 00:07:06.042 "write_zeroes": true, 00:07:06.042 "zcopy": true, 00:07:06.042 "get_zone_info": false, 00:07:06.042 "zone_management": false, 00:07:06.042 "zone_append": false, 00:07:06.042 "compare": false, 00:07:06.042 "compare_and_write": false, 00:07:06.042 "abort": true, 00:07:06.042 "seek_hole": false, 00:07:06.042 "seek_data": false, 00:07:06.042 "copy": true, 00:07:06.042 "nvme_iov_md": false 00:07:06.042 }, 00:07:06.042 "memory_domains": [ 00:07:06.042 { 00:07:06.042 "dma_device_id": "system", 00:07:06.042 "dma_device_type": 1 00:07:06.042 }, 00:07:06.042 { 00:07:06.042 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:07:06.042 "dma_device_type": 2 00:07:06.042 } 00:07:06.042 ], 00:07:06.042 "driver_specific": {} 00:07:06.042 } 00:07:06.042 ] 00:07:06.042 19:47:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:06.042 19:47:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:06.042 19:47:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:07:06.042 19:47:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:06.042 19:47:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:06.042 19:47:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:06.042 19:47:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:06.042 19:47:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:06.042 19:47:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:06.042 19:47:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:06.042 19:47:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:06.042 19:47:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:06.042 19:47:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:06.042 19:47:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:06.042 19:47:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.042 19:47:56 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:07:06.042 19:47:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:06.042 19:47:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:06.042 "name": "Existed_Raid", 00:07:06.042 "uuid": "026e7e0d-afba-4bf5-a484-5e285c69478c", 00:07:06.042 "strip_size_kb": 64, 00:07:06.042 "state": "configuring", 00:07:06.042 "raid_level": "concat", 00:07:06.042 "superblock": true, 00:07:06.042 "num_base_bdevs": 3, 00:07:06.042 "num_base_bdevs_discovered": 2, 00:07:06.042 "num_base_bdevs_operational": 3, 00:07:06.042 "base_bdevs_list": [ 00:07:06.042 { 00:07:06.042 "name": "BaseBdev1", 00:07:06.042 "uuid": "4aac9f44-deb7-409d-a6ff-1c7839398b8e", 00:07:06.042 "is_configured": true, 00:07:06.042 "data_offset": 2048, 00:07:06.042 "data_size": 63488 00:07:06.042 }, 00:07:06.042 { 00:07:06.042 "name": null, 00:07:06.042 "uuid": "bbb3d650-3d55-459a-81f6-0f55bebec0c5", 00:07:06.042 "is_configured": false, 00:07:06.042 "data_offset": 0, 00:07:06.042 "data_size": 63488 00:07:06.042 }, 00:07:06.042 { 00:07:06.042 "name": "BaseBdev3", 00:07:06.042 "uuid": "1beeb7e9-8563-4456-8758-b0362f9ce242", 00:07:06.042 "is_configured": true, 00:07:06.042 "data_offset": 2048, 00:07:06.042 "data_size": 63488 00:07:06.042 } 00:07:06.042 ] 00:07:06.042 }' 00:07:06.042 19:47:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:06.042 19:47:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:06.301 19:47:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:06.301 19:47:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:07:06.301 19:47:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.301 19:47:57 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:07:06.301 19:47:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:06.301 19:47:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:07:06.301 19:47:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:07:06.301 19:47:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.301 19:47:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:06.301 [2024-11-26 19:47:57.197381] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:07:06.301 19:47:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:06.301 19:47:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:07:06.301 19:47:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:06.301 19:47:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:06.301 19:47:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:06.301 19:47:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:06.301 19:47:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:06.301 19:47:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:06.301 19:47:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:06.301 19:47:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:06.301 19:47:57 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:07:06.301 19:47:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:06.301 19:47:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.301 19:47:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:06.301 19:47:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:06.301 19:47:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:06.559 19:47:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:06.559 "name": "Existed_Raid", 00:07:06.559 "uuid": "026e7e0d-afba-4bf5-a484-5e285c69478c", 00:07:06.559 "strip_size_kb": 64, 00:07:06.559 "state": "configuring", 00:07:06.559 "raid_level": "concat", 00:07:06.559 "superblock": true, 00:07:06.559 "num_base_bdevs": 3, 00:07:06.559 "num_base_bdevs_discovered": 1, 00:07:06.559 "num_base_bdevs_operational": 3, 00:07:06.559 "base_bdevs_list": [ 00:07:06.559 { 00:07:06.559 "name": "BaseBdev1", 00:07:06.559 "uuid": "4aac9f44-deb7-409d-a6ff-1c7839398b8e", 00:07:06.559 "is_configured": true, 00:07:06.559 "data_offset": 2048, 00:07:06.559 "data_size": 63488 00:07:06.559 }, 00:07:06.559 { 00:07:06.559 "name": null, 00:07:06.559 "uuid": "bbb3d650-3d55-459a-81f6-0f55bebec0c5", 00:07:06.559 "is_configured": false, 00:07:06.559 "data_offset": 0, 00:07:06.559 "data_size": 63488 00:07:06.559 }, 00:07:06.559 { 00:07:06.559 "name": null, 00:07:06.559 "uuid": "1beeb7e9-8563-4456-8758-b0362f9ce242", 00:07:06.559 "is_configured": false, 00:07:06.559 "data_offset": 0, 00:07:06.559 "data_size": 63488 00:07:06.559 } 00:07:06.559 ] 00:07:06.559 }' 00:07:06.559 19:47:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:06.559 19:47:57 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:07:06.816 19:47:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:07:06.816 19:47:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:06.816 19:47:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.816 19:47:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:06.816 19:47:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:06.816 19:47:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:07:06.816 19:47:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:07:06.816 19:47:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.816 19:47:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:06.816 [2024-11-26 19:47:57.549487] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:07:06.816 19:47:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:06.816 19:47:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:07:06.816 19:47:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:06.816 19:47:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:06.816 19:47:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:06.816 19:47:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:06.816 19:47:57 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:06.816 19:47:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:06.816 19:47:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:06.816 19:47:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:06.816 19:47:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:06.816 19:47:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:06.816 19:47:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:06.816 19:47:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.816 19:47:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:06.816 19:47:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:06.816 19:47:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:06.816 "name": "Existed_Raid", 00:07:06.816 "uuid": "026e7e0d-afba-4bf5-a484-5e285c69478c", 00:07:06.817 "strip_size_kb": 64, 00:07:06.817 "state": "configuring", 00:07:06.817 "raid_level": "concat", 00:07:06.817 "superblock": true, 00:07:06.817 "num_base_bdevs": 3, 00:07:06.817 "num_base_bdevs_discovered": 2, 00:07:06.817 "num_base_bdevs_operational": 3, 00:07:06.817 "base_bdevs_list": [ 00:07:06.817 { 00:07:06.817 "name": "BaseBdev1", 00:07:06.817 "uuid": "4aac9f44-deb7-409d-a6ff-1c7839398b8e", 00:07:06.817 "is_configured": true, 00:07:06.817 "data_offset": 2048, 00:07:06.817 "data_size": 63488 00:07:06.817 }, 00:07:06.817 { 00:07:06.817 "name": null, 00:07:06.817 "uuid": "bbb3d650-3d55-459a-81f6-0f55bebec0c5", 00:07:06.817 "is_configured": 
false, 00:07:06.817 "data_offset": 0, 00:07:06.817 "data_size": 63488 00:07:06.817 }, 00:07:06.817 { 00:07:06.817 "name": "BaseBdev3", 00:07:06.817 "uuid": "1beeb7e9-8563-4456-8758-b0362f9ce242", 00:07:06.817 "is_configured": true, 00:07:06.817 "data_offset": 2048, 00:07:06.817 "data_size": 63488 00:07:06.817 } 00:07:06.817 ] 00:07:06.817 }' 00:07:06.817 19:47:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:06.817 19:47:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:07.074 19:47:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:07.074 19:47:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:07:07.074 19:47:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.075 19:47:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:07.075 19:47:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.075 19:47:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:07:07.075 19:47:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:07.075 19:47:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.075 19:47:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:07.075 [2024-11-26 19:47:57.917577] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:07.075 19:47:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.075 19:47:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:07:07.075 19:47:57 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:07.075 19:47:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:07.075 19:47:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:07.075 19:47:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:07.075 19:47:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:07.075 19:47:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:07.075 19:47:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:07.075 19:47:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:07.075 19:47:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:07.075 19:47:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:07.075 19:47:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:07.075 19:47:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.075 19:47:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:07.075 19:47:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.075 19:47:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:07.075 "name": "Existed_Raid", 00:07:07.075 "uuid": "026e7e0d-afba-4bf5-a484-5e285c69478c", 00:07:07.075 "strip_size_kb": 64, 00:07:07.075 "state": "configuring", 00:07:07.075 "raid_level": "concat", 00:07:07.075 "superblock": true, 00:07:07.075 "num_base_bdevs": 3, 00:07:07.075 
"num_base_bdevs_discovered": 1, 00:07:07.075 "num_base_bdevs_operational": 3, 00:07:07.075 "base_bdevs_list": [ 00:07:07.075 { 00:07:07.075 "name": null, 00:07:07.075 "uuid": "4aac9f44-deb7-409d-a6ff-1c7839398b8e", 00:07:07.075 "is_configured": false, 00:07:07.075 "data_offset": 0, 00:07:07.075 "data_size": 63488 00:07:07.075 }, 00:07:07.075 { 00:07:07.075 "name": null, 00:07:07.075 "uuid": "bbb3d650-3d55-459a-81f6-0f55bebec0c5", 00:07:07.075 "is_configured": false, 00:07:07.075 "data_offset": 0, 00:07:07.075 "data_size": 63488 00:07:07.075 }, 00:07:07.075 { 00:07:07.075 "name": "BaseBdev3", 00:07:07.075 "uuid": "1beeb7e9-8563-4456-8758-b0362f9ce242", 00:07:07.075 "is_configured": true, 00:07:07.075 "data_offset": 2048, 00:07:07.075 "data_size": 63488 00:07:07.075 } 00:07:07.075 ] 00:07:07.075 }' 00:07:07.075 19:47:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:07.075 19:47:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:07.640 19:47:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:07.640 19:47:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:07:07.640 19:47:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.640 19:47:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:07.640 19:47:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.640 19:47:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:07:07.640 19:47:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:07:07.640 19:47:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.640 19:47:58 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:07.640 [2024-11-26 19:47:58.312365] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:07.640 19:47:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.640 19:47:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:07:07.640 19:47:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:07.640 19:47:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:07.640 19:47:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:07.640 19:47:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:07.640 19:47:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:07.640 19:47:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:07.640 19:47:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:07.640 19:47:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:07.640 19:47:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:07.640 19:47:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:07.640 19:47:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.640 19:47:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:07.640 19:47:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:07.640 
19:47:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.640 19:47:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:07.640 "name": "Existed_Raid", 00:07:07.640 "uuid": "026e7e0d-afba-4bf5-a484-5e285c69478c", 00:07:07.640 "strip_size_kb": 64, 00:07:07.640 "state": "configuring", 00:07:07.640 "raid_level": "concat", 00:07:07.640 "superblock": true, 00:07:07.640 "num_base_bdevs": 3, 00:07:07.640 "num_base_bdevs_discovered": 2, 00:07:07.640 "num_base_bdevs_operational": 3, 00:07:07.640 "base_bdevs_list": [ 00:07:07.640 { 00:07:07.640 "name": null, 00:07:07.640 "uuid": "4aac9f44-deb7-409d-a6ff-1c7839398b8e", 00:07:07.640 "is_configured": false, 00:07:07.640 "data_offset": 0, 00:07:07.640 "data_size": 63488 00:07:07.640 }, 00:07:07.640 { 00:07:07.640 "name": "BaseBdev2", 00:07:07.640 "uuid": "bbb3d650-3d55-459a-81f6-0f55bebec0c5", 00:07:07.640 "is_configured": true, 00:07:07.640 "data_offset": 2048, 00:07:07.640 "data_size": 63488 00:07:07.640 }, 00:07:07.640 { 00:07:07.640 "name": "BaseBdev3", 00:07:07.640 "uuid": "1beeb7e9-8563-4456-8758-b0362f9ce242", 00:07:07.640 "is_configured": true, 00:07:07.640 "data_offset": 2048, 00:07:07.640 "data_size": 63488 00:07:07.640 } 00:07:07.640 ] 00:07:07.640 }' 00:07:07.640 19:47:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:07.640 19:47:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:07.899 19:47:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:07:07.899 19:47:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:07.899 19:47:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.899 19:47:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:07:07.899 19:47:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.899 19:47:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:07:07.899 19:47:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:07.899 19:47:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.899 19:47:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:07.899 19:47:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:07:07.899 19:47:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.899 19:47:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 4aac9f44-deb7-409d-a6ff-1c7839398b8e 00:07:07.899 19:47:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.899 19:47:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:07.899 [2024-11-26 19:47:58.761437] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:07:07.899 [2024-11-26 19:47:58.761697] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:07:07.899 [2024-11-26 19:47:58.761714] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:07:07.899 [2024-11-26 19:47:58.761976] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:07:07.899 [2024-11-26 19:47:58.762108] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:07:07.899 [2024-11-26 19:47:58.762117] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 
00:07:07.899 NewBaseBdev 00:07:07.899 [2024-11-26 19:47:58.762249] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:07.899 19:47:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.899 19:47:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:07:07.899 19:47:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:07:07.899 19:47:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:07.899 19:47:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:07.899 19:47:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:07.899 19:47:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:07.899 19:47:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:07.899 19:47:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.899 19:47:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:07.899 19:47:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.899 19:47:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:07:07.899 19:47:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.899 19:47:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:07.899 [ 00:07:07.899 { 00:07:07.899 "name": "NewBaseBdev", 00:07:07.899 "aliases": [ 00:07:07.899 "4aac9f44-deb7-409d-a6ff-1c7839398b8e" 00:07:07.899 ], 00:07:07.899 "product_name": "Malloc disk", 00:07:07.899 "block_size": 512, 
00:07:07.899 "num_blocks": 65536, 00:07:07.899 "uuid": "4aac9f44-deb7-409d-a6ff-1c7839398b8e", 00:07:07.899 "assigned_rate_limits": { 00:07:07.899 "rw_ios_per_sec": 0, 00:07:07.899 "rw_mbytes_per_sec": 0, 00:07:07.899 "r_mbytes_per_sec": 0, 00:07:07.899 "w_mbytes_per_sec": 0 00:07:07.899 }, 00:07:07.899 "claimed": true, 00:07:07.899 "claim_type": "exclusive_write", 00:07:07.899 "zoned": false, 00:07:07.899 "supported_io_types": { 00:07:07.899 "read": true, 00:07:07.899 "write": true, 00:07:07.899 "unmap": true, 00:07:07.899 "flush": true, 00:07:07.899 "reset": true, 00:07:07.899 "nvme_admin": false, 00:07:07.899 "nvme_io": false, 00:07:07.899 "nvme_io_md": false, 00:07:07.899 "write_zeroes": true, 00:07:07.899 "zcopy": true, 00:07:07.899 "get_zone_info": false, 00:07:07.899 "zone_management": false, 00:07:07.899 "zone_append": false, 00:07:07.899 "compare": false, 00:07:07.899 "compare_and_write": false, 00:07:07.899 "abort": true, 00:07:07.899 "seek_hole": false, 00:07:07.899 "seek_data": false, 00:07:07.899 "copy": true, 00:07:07.899 "nvme_iov_md": false 00:07:07.899 }, 00:07:07.899 "memory_domains": [ 00:07:07.899 { 00:07:07.899 "dma_device_id": "system", 00:07:07.899 "dma_device_type": 1 00:07:07.899 }, 00:07:07.899 { 00:07:07.899 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:07.899 "dma_device_type": 2 00:07:07.899 } 00:07:07.899 ], 00:07:07.899 "driver_specific": {} 00:07:07.899 } 00:07:07.899 ] 00:07:07.899 19:47:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.899 19:47:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:07.899 19:47:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:07:07.899 19:47:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:07.899 19:47:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 
-- # local expected_state=online 00:07:07.899 19:47:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:07.899 19:47:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:07.899 19:47:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:07.899 19:47:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:07.899 19:47:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:07.899 19:47:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:07.899 19:47:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:07.899 19:47:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:07.899 19:47:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.899 19:47:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:07.899 19:47:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:07.899 19:47:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.899 19:47:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:07.899 "name": "Existed_Raid", 00:07:07.899 "uuid": "026e7e0d-afba-4bf5-a484-5e285c69478c", 00:07:07.899 "strip_size_kb": 64, 00:07:07.899 "state": "online", 00:07:07.899 "raid_level": "concat", 00:07:07.899 "superblock": true, 00:07:07.899 "num_base_bdevs": 3, 00:07:07.899 "num_base_bdevs_discovered": 3, 00:07:07.899 "num_base_bdevs_operational": 3, 00:07:07.899 "base_bdevs_list": [ 00:07:07.899 { 00:07:07.899 "name": "NewBaseBdev", 00:07:07.899 "uuid": 
"4aac9f44-deb7-409d-a6ff-1c7839398b8e", 00:07:07.899 "is_configured": true, 00:07:07.899 "data_offset": 2048, 00:07:07.899 "data_size": 63488 00:07:07.899 }, 00:07:07.899 { 00:07:07.899 "name": "BaseBdev2", 00:07:07.899 "uuid": "bbb3d650-3d55-459a-81f6-0f55bebec0c5", 00:07:07.899 "is_configured": true, 00:07:07.899 "data_offset": 2048, 00:07:07.899 "data_size": 63488 00:07:07.899 }, 00:07:07.899 { 00:07:07.899 "name": "BaseBdev3", 00:07:07.899 "uuid": "1beeb7e9-8563-4456-8758-b0362f9ce242", 00:07:07.899 "is_configured": true, 00:07:07.899 "data_offset": 2048, 00:07:07.899 "data_size": 63488 00:07:07.899 } 00:07:07.899 ] 00:07:07.899 }' 00:07:07.899 19:47:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:07.899 19:47:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:08.157 19:47:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:07:08.157 19:47:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:08.157 19:47:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:08.157 19:47:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:08.157 19:47:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:07:08.157 19:47:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:08.157 19:47:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:08.157 19:47:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:08.157 19:47:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.157 19:47:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:07:08.417 [2024-11-26 19:47:59.093892] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:08.417 19:47:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.417 19:47:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:08.417 "name": "Existed_Raid", 00:07:08.417 "aliases": [ 00:07:08.417 "026e7e0d-afba-4bf5-a484-5e285c69478c" 00:07:08.417 ], 00:07:08.417 "product_name": "Raid Volume", 00:07:08.417 "block_size": 512, 00:07:08.417 "num_blocks": 190464, 00:07:08.417 "uuid": "026e7e0d-afba-4bf5-a484-5e285c69478c", 00:07:08.417 "assigned_rate_limits": { 00:07:08.417 "rw_ios_per_sec": 0, 00:07:08.417 "rw_mbytes_per_sec": 0, 00:07:08.417 "r_mbytes_per_sec": 0, 00:07:08.417 "w_mbytes_per_sec": 0 00:07:08.417 }, 00:07:08.417 "claimed": false, 00:07:08.417 "zoned": false, 00:07:08.417 "supported_io_types": { 00:07:08.417 "read": true, 00:07:08.417 "write": true, 00:07:08.417 "unmap": true, 00:07:08.417 "flush": true, 00:07:08.417 "reset": true, 00:07:08.417 "nvme_admin": false, 00:07:08.417 "nvme_io": false, 00:07:08.417 "nvme_io_md": false, 00:07:08.417 "write_zeroes": true, 00:07:08.417 "zcopy": false, 00:07:08.417 "get_zone_info": false, 00:07:08.417 "zone_management": false, 00:07:08.417 "zone_append": false, 00:07:08.417 "compare": false, 00:07:08.417 "compare_and_write": false, 00:07:08.417 "abort": false, 00:07:08.417 "seek_hole": false, 00:07:08.417 "seek_data": false, 00:07:08.417 "copy": false, 00:07:08.417 "nvme_iov_md": false 00:07:08.417 }, 00:07:08.417 "memory_domains": [ 00:07:08.417 { 00:07:08.417 "dma_device_id": "system", 00:07:08.417 "dma_device_type": 1 00:07:08.417 }, 00:07:08.417 { 00:07:08.417 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:08.417 "dma_device_type": 2 00:07:08.417 }, 00:07:08.417 { 00:07:08.417 "dma_device_id": "system", 00:07:08.417 "dma_device_type": 1 00:07:08.417 }, 00:07:08.417 { 00:07:08.417 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:08.417 "dma_device_type": 2 00:07:08.417 }, 00:07:08.417 { 00:07:08.417 "dma_device_id": "system", 00:07:08.417 "dma_device_type": 1 00:07:08.417 }, 00:07:08.417 { 00:07:08.417 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:08.417 "dma_device_type": 2 00:07:08.417 } 00:07:08.417 ], 00:07:08.417 "driver_specific": { 00:07:08.417 "raid": { 00:07:08.417 "uuid": "026e7e0d-afba-4bf5-a484-5e285c69478c", 00:07:08.417 "strip_size_kb": 64, 00:07:08.417 "state": "online", 00:07:08.417 "raid_level": "concat", 00:07:08.417 "superblock": true, 00:07:08.417 "num_base_bdevs": 3, 00:07:08.417 "num_base_bdevs_discovered": 3, 00:07:08.417 "num_base_bdevs_operational": 3, 00:07:08.417 "base_bdevs_list": [ 00:07:08.417 { 00:07:08.417 "name": "NewBaseBdev", 00:07:08.417 "uuid": "4aac9f44-deb7-409d-a6ff-1c7839398b8e", 00:07:08.417 "is_configured": true, 00:07:08.417 "data_offset": 2048, 00:07:08.417 "data_size": 63488 00:07:08.417 }, 00:07:08.417 { 00:07:08.417 "name": "BaseBdev2", 00:07:08.417 "uuid": "bbb3d650-3d55-459a-81f6-0f55bebec0c5", 00:07:08.417 "is_configured": true, 00:07:08.417 "data_offset": 2048, 00:07:08.417 "data_size": 63488 00:07:08.417 }, 00:07:08.417 { 00:07:08.417 "name": "BaseBdev3", 00:07:08.417 "uuid": "1beeb7e9-8563-4456-8758-b0362f9ce242", 00:07:08.417 "is_configured": true, 00:07:08.417 "data_offset": 2048, 00:07:08.417 "data_size": 63488 00:07:08.417 } 00:07:08.417 ] 00:07:08.417 } 00:07:08.417 } 00:07:08.417 }' 00:07:08.417 19:47:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:08.417 19:47:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:07:08.417 BaseBdev2 00:07:08.417 BaseBdev3' 00:07:08.417 19:47:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:07:08.417 19:47:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:08.417 19:47:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:08.417 19:47:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:07:08.417 19:47:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.417 19:47:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:08.417 19:47:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:08.417 19:47:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.417 19:47:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:08.417 19:47:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:08.417 19:47:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:08.417 19:47:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:08.417 19:47:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:08.417 19:47:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.417 19:47:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:08.417 19:47:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.417 19:47:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:08.417 19:47:59 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:08.417 19:47:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:08.417 19:47:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:07:08.417 19:47:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.417 19:47:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:08.417 19:47:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:08.417 19:47:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.417 19:47:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:08.418 19:47:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:08.418 19:47:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:08.418 19:47:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.418 19:47:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:08.418 [2024-11-26 19:47:59.313612] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:08.418 [2024-11-26 19:47:59.313653] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:08.418 [2024-11-26 19:47:59.313746] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:08.418 [2024-11-26 19:47:59.313819] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:08.418 [2024-11-26 19:47:59.313833] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008200 name Existed_Raid, state offline 00:07:08.418 19:47:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.418 19:47:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 64691 00:07:08.418 19:47:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 64691 ']' 00:07:08.418 19:47:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 64691 00:07:08.418 19:47:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:07:08.418 19:47:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:08.418 19:47:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64691 00:07:08.418 killing process with pid 64691 00:07:08.418 19:47:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:08.418 19:47:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:08.418 19:47:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64691' 00:07:08.418 19:47:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 64691 00:07:08.418 19:47:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 64691 00:07:08.418 [2024-11-26 19:47:59.341587] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:08.675 [2024-11-26 19:47:59.541895] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:09.609 ************************************ 00:07:09.609 END TEST raid_state_function_test_sb 00:07:09.609 ************************************ 00:07:09.609 19:48:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:07:09.609 00:07:09.609 real 0m8.070s 
00:07:09.609 user 0m12.800s 00:07:09.609 sys 0m1.337s 00:07:09.609 19:48:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:09.609 19:48:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:09.609 19:48:00 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:07:09.609 19:48:00 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:07:09.609 19:48:00 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:09.609 19:48:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:09.609 ************************************ 00:07:09.609 START TEST raid_superblock_test 00:07:09.609 ************************************ 00:07:09.609 19:48:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 3 00:07:09.609 19:48:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:07:09.609 19:48:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:07:09.609 19:48:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:07:09.609 19:48:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:07:09.609 19:48:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:07:09.609 19:48:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:07:09.609 19:48:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:07:09.609 19:48:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:07:09.609 19:48:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:07:09.609 19:48:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:07:09.609 19:48:00 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:07:09.609 19:48:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:07:09.609 19:48:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:07:09.609 19:48:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:07:09.609 19:48:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:07:09.609 19:48:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:07:09.609 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:09.609 19:48:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=65289 00:07:09.609 19:48:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 65289 00:07:09.609 19:48:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 65289 ']' 00:07:09.609 19:48:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:09.609 19:48:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:09.609 19:48:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:09.609 19:48:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:09.609 19:48:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.609 19:48:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:07:09.609 [2024-11-26 19:48:00.437803] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 
00:07:09.609 [2024-11-26 19:48:00.438149] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65289 ] 00:07:09.867 [2024-11-26 19:48:00.601219] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.867 [2024-11-26 19:48:00.722672] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.125 [2024-11-26 19:48:00.872726] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:10.125 [2024-11-26 19:48:00.872786] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:10.693 19:48:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:10.693 19:48:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:07:10.693 19:48:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:07:10.693 19:48:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:10.693 19:48:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:07:10.693 19:48:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:07:10.693 19:48:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:07:10.693 19:48:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:10.693 19:48:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:10.693 19:48:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:10.693 19:48:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:07:10.693 
19:48:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:10.693 19:48:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.693 malloc1 00:07:10.693 19:48:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:10.693 19:48:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:10.693 19:48:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:10.693 19:48:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.693 [2024-11-26 19:48:01.385506] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:10.693 [2024-11-26 19:48:01.385745] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:10.693 [2024-11-26 19:48:01.385778] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:10.693 [2024-11-26 19:48:01.385790] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:10.693 [2024-11-26 19:48:01.388232] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:10.693 [2024-11-26 19:48:01.388275] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:10.693 pt1 00:07:10.693 19:48:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:10.693 19:48:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:10.693 19:48:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:10.693 19:48:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:07:10.693 19:48:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:07:10.693 19:48:01 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:07:10.693 19:48:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:10.693 19:48:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:10.693 19:48:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:10.693 19:48:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:07:10.693 19:48:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:10.693 19:48:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.693 malloc2 00:07:10.693 19:48:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:10.693 19:48:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:10.693 19:48:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:10.693 19:48:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.693 [2024-11-26 19:48:01.427917] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:10.693 [2024-11-26 19:48:01.427996] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:10.693 [2024-11-26 19:48:01.428026] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:07:10.693 [2024-11-26 19:48:01.428037] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:10.693 [2024-11-26 19:48:01.430378] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:10.693 [2024-11-26 19:48:01.430414] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:10.693 
pt2 00:07:10.694 19:48:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:10.694 19:48:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:10.694 19:48:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:10.694 19:48:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:07:10.694 19:48:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:07:10.694 19:48:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:07:10.694 19:48:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:10.694 19:48:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:10.694 19:48:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:10.694 19:48:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:07:10.694 19:48:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:10.694 19:48:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.694 malloc3 00:07:10.694 19:48:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:10.694 19:48:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:07:10.694 19:48:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:10.694 19:48:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.694 [2024-11-26 19:48:01.479818] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:07:10.694 [2024-11-26 19:48:01.480061] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:10.694 [2024-11-26 19:48:01.480097] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:07:10.694 [2024-11-26 19:48:01.480107] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:10.694 [2024-11-26 19:48:01.482489] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:10.694 [2024-11-26 19:48:01.482529] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:07:10.694 pt3 00:07:10.694 19:48:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:10.694 19:48:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:10.694 19:48:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:10.694 19:48:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:07:10.694 19:48:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:10.694 19:48:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.694 [2024-11-26 19:48:01.487865] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:10.694 [2024-11-26 19:48:01.489856] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:10.694 [2024-11-26 19:48:01.489930] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:07:10.694 [2024-11-26 19:48:01.490111] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:10.694 [2024-11-26 19:48:01.490124] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:07:10.694 [2024-11-26 19:48:01.490446] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
00:07:10.694 [2024-11-26 19:48:01.490609] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:10.694 [2024-11-26 19:48:01.490619] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:07:10.694 [2024-11-26 19:48:01.490782] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:10.694 19:48:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:10.694 19:48:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:07:10.694 19:48:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:10.694 19:48:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:10.694 19:48:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:10.694 19:48:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:10.694 19:48:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:10.694 19:48:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:10.694 19:48:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:10.694 19:48:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:10.694 19:48:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:10.694 19:48:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:10.694 19:48:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:10.694 19:48:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:10.694 19:48:01 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.694 19:48:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:10.694 19:48:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:10.694 "name": "raid_bdev1", 00:07:10.694 "uuid": "36a58ec4-59db-4e90-b85e-819d90c15af7", 00:07:10.694 "strip_size_kb": 64, 00:07:10.694 "state": "online", 00:07:10.694 "raid_level": "concat", 00:07:10.694 "superblock": true, 00:07:10.694 "num_base_bdevs": 3, 00:07:10.694 "num_base_bdevs_discovered": 3, 00:07:10.694 "num_base_bdevs_operational": 3, 00:07:10.694 "base_bdevs_list": [ 00:07:10.694 { 00:07:10.694 "name": "pt1", 00:07:10.694 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:10.694 "is_configured": true, 00:07:10.694 "data_offset": 2048, 00:07:10.694 "data_size": 63488 00:07:10.694 }, 00:07:10.694 { 00:07:10.694 "name": "pt2", 00:07:10.694 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:10.694 "is_configured": true, 00:07:10.694 "data_offset": 2048, 00:07:10.694 "data_size": 63488 00:07:10.694 }, 00:07:10.694 { 00:07:10.694 "name": "pt3", 00:07:10.694 "uuid": "00000000-0000-0000-0000-000000000003", 00:07:10.694 "is_configured": true, 00:07:10.694 "data_offset": 2048, 00:07:10.694 "data_size": 63488 00:07:10.694 } 00:07:10.694 ] 00:07:10.694 }' 00:07:10.694 19:48:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:10.694 19:48:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.953 19:48:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:07:10.953 19:48:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:10.953 19:48:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:10.953 19:48:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:07:10.953 19:48:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:10.953 19:48:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:10.953 19:48:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:10.953 19:48:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:10.953 19:48:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.953 19:48:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:10.953 [2024-11-26 19:48:01.816257] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:10.953 19:48:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:10.953 19:48:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:10.953 "name": "raid_bdev1", 00:07:10.953 "aliases": [ 00:07:10.953 "36a58ec4-59db-4e90-b85e-819d90c15af7" 00:07:10.953 ], 00:07:10.953 "product_name": "Raid Volume", 00:07:10.953 "block_size": 512, 00:07:10.953 "num_blocks": 190464, 00:07:10.953 "uuid": "36a58ec4-59db-4e90-b85e-819d90c15af7", 00:07:10.953 "assigned_rate_limits": { 00:07:10.953 "rw_ios_per_sec": 0, 00:07:10.953 "rw_mbytes_per_sec": 0, 00:07:10.953 "r_mbytes_per_sec": 0, 00:07:10.953 "w_mbytes_per_sec": 0 00:07:10.953 }, 00:07:10.953 "claimed": false, 00:07:10.953 "zoned": false, 00:07:10.953 "supported_io_types": { 00:07:10.953 "read": true, 00:07:10.953 "write": true, 00:07:10.953 "unmap": true, 00:07:10.953 "flush": true, 00:07:10.953 "reset": true, 00:07:10.953 "nvme_admin": false, 00:07:10.953 "nvme_io": false, 00:07:10.953 "nvme_io_md": false, 00:07:10.953 "write_zeroes": true, 00:07:10.953 "zcopy": false, 00:07:10.953 "get_zone_info": false, 00:07:10.953 "zone_management": false, 00:07:10.953 "zone_append": false, 00:07:10.953 "compare": 
false, 00:07:10.953 "compare_and_write": false, 00:07:10.953 "abort": false, 00:07:10.953 "seek_hole": false, 00:07:10.953 "seek_data": false, 00:07:10.953 "copy": false, 00:07:10.953 "nvme_iov_md": false 00:07:10.953 }, 00:07:10.953 "memory_domains": [ 00:07:10.953 { 00:07:10.953 "dma_device_id": "system", 00:07:10.953 "dma_device_type": 1 00:07:10.953 }, 00:07:10.953 { 00:07:10.953 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:10.953 "dma_device_type": 2 00:07:10.953 }, 00:07:10.953 { 00:07:10.953 "dma_device_id": "system", 00:07:10.953 "dma_device_type": 1 00:07:10.953 }, 00:07:10.953 { 00:07:10.953 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:10.953 "dma_device_type": 2 00:07:10.953 }, 00:07:10.953 { 00:07:10.953 "dma_device_id": "system", 00:07:10.953 "dma_device_type": 1 00:07:10.953 }, 00:07:10.953 { 00:07:10.953 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:10.953 "dma_device_type": 2 00:07:10.953 } 00:07:10.953 ], 00:07:10.953 "driver_specific": { 00:07:10.953 "raid": { 00:07:10.953 "uuid": "36a58ec4-59db-4e90-b85e-819d90c15af7", 00:07:10.953 "strip_size_kb": 64, 00:07:10.953 "state": "online", 00:07:10.953 "raid_level": "concat", 00:07:10.953 "superblock": true, 00:07:10.953 "num_base_bdevs": 3, 00:07:10.953 "num_base_bdevs_discovered": 3, 00:07:10.953 "num_base_bdevs_operational": 3, 00:07:10.953 "base_bdevs_list": [ 00:07:10.953 { 00:07:10.953 "name": "pt1", 00:07:10.953 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:10.953 "is_configured": true, 00:07:10.953 "data_offset": 2048, 00:07:10.953 "data_size": 63488 00:07:10.953 }, 00:07:10.953 { 00:07:10.953 "name": "pt2", 00:07:10.953 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:10.953 "is_configured": true, 00:07:10.953 "data_offset": 2048, 00:07:10.953 "data_size": 63488 00:07:10.953 }, 00:07:10.953 { 00:07:10.953 "name": "pt3", 00:07:10.953 "uuid": "00000000-0000-0000-0000-000000000003", 00:07:10.953 "is_configured": true, 00:07:10.953 "data_offset": 2048, 00:07:10.953 
"data_size": 63488 00:07:10.953 } 00:07:10.953 ] 00:07:10.953 } 00:07:10.953 } 00:07:10.953 }' 00:07:10.953 19:48:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:10.953 19:48:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:10.953 pt2 00:07:10.953 pt3' 00:07:10.953 19:48:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:11.249 19:48:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:11.249 19:48:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:11.249 19:48:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:11.249 19:48:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:11.249 19:48:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.249 19:48:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:11.249 19:48:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:11.249 19:48:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:11.250 19:48:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:11.250 19:48:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:11.250 19:48:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:11.250 19:48:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:11.250 19:48:01 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:11.250 19:48:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.250 19:48:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:11.250 19:48:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:11.250 19:48:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:11.250 19:48:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:11.250 19:48:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:07:11.250 19:48:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:11.250 19:48:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.250 19:48:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:11.250 19:48:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:11.250 19:48:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:11.250 19:48:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:11.250 19:48:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:11.250 19:48:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:11.250 19:48:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.250 19:48:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:07:11.250 [2024-11-26 19:48:02.028293] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:11.250 19:48:02 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:11.250 19:48:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=36a58ec4-59db-4e90-b85e-819d90c15af7 00:07:11.250 19:48:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 36a58ec4-59db-4e90-b85e-819d90c15af7 ']' 00:07:11.250 19:48:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:11.250 19:48:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:11.250 19:48:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.250 [2024-11-26 19:48:02.059965] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:11.250 [2024-11-26 19:48:02.060009] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:11.250 [2024-11-26 19:48:02.060103] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:11.250 [2024-11-26 19:48:02.060191] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:11.250 [2024-11-26 19:48:02.060203] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:07:11.250 19:48:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:11.250 19:48:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:11.250 19:48:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:11.250 19:48:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.250 19:48:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:07:11.250 19:48:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:11.250 19:48:02 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:07:11.250 19:48:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:07:11.250 19:48:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:11.250 19:48:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:07:11.250 19:48:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:11.250 19:48:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.250 19:48:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:11.250 19:48:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:11.250 19:48:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:07:11.250 19:48:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:11.250 19:48:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.250 19:48:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:11.250 19:48:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:11.250 19:48:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:07:11.250 19:48:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:11.250 19:48:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.250 19:48:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:11.250 19:48:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:07:11.250 19:48:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:07:11.250 19:48:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:07:11.250 19:48:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.250 19:48:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:11.535 19:48:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:07:11.535 19:48:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:07:11.535 19:48:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:07:11.535 19:48:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:07:11.535 19:48:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:07:11.535 19:48:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:11.535 19:48:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:07:11.535 19:48:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:11.535 19:48:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:07:11.535 19:48:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:11.535 19:48:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.535 [2024-11-26 19:48:02.172058] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:07:11.535 [2024-11-26 19:48:02.174132] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 
malloc2 is claimed 00:07:11.535 [2024-11-26 19:48:02.174329] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:07:11.535 [2024-11-26 19:48:02.174412] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:07:11.535 [2024-11-26 19:48:02.174474] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:07:11.535 [2024-11-26 19:48:02.174494] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:07:11.535 [2024-11-26 19:48:02.174512] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:11.535 [2024-11-26 19:48:02.174523] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:07:11.535 request: 00:07:11.535 { 00:07:11.535 "name": "raid_bdev1", 00:07:11.535 "raid_level": "concat", 00:07:11.535 "base_bdevs": [ 00:07:11.535 "malloc1", 00:07:11.535 "malloc2", 00:07:11.535 "malloc3" 00:07:11.535 ], 00:07:11.535 "strip_size_kb": 64, 00:07:11.535 "superblock": false, 00:07:11.535 "method": "bdev_raid_create", 00:07:11.535 "req_id": 1 00:07:11.535 } 00:07:11.535 Got JSON-RPC error response 00:07:11.535 response: 00:07:11.535 { 00:07:11.535 "code": -17, 00:07:11.535 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:07:11.535 } 00:07:11.535 19:48:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:07:11.535 19:48:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:07:11.535 19:48:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:11.535 19:48:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:11.535 19:48:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es 
== 0 )) 00:07:11.535 19:48:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:07:11.535 19:48:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:11.535 19:48:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:11.535 19:48:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.535 19:48:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:11.535 19:48:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:07:11.535 19:48:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:07:11.535 19:48:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:11.535 19:48:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:11.535 19:48:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.535 [2024-11-26 19:48:02.216008] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:11.535 [2024-11-26 19:48:02.216095] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:11.535 [2024-11-26 19:48:02.216118] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:07:11.535 [2024-11-26 19:48:02.216128] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:11.535 [2024-11-26 19:48:02.218786] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:11.535 [2024-11-26 19:48:02.218831] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:11.535 [2024-11-26 19:48:02.218953] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:07:11.535 [2024-11-26 19:48:02.219007] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:11.535 pt1 00:07:11.535 19:48:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:11.535 19:48:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:07:11.535 19:48:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:11.535 19:48:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:11.535 19:48:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:11.535 19:48:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:11.535 19:48:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:11.535 19:48:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:11.535 19:48:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:11.535 19:48:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:11.535 19:48:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:11.535 19:48:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:11.535 19:48:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:11.535 19:48:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:11.535 19:48:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.535 19:48:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:11.535 19:48:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:11.535 "name": "raid_bdev1", 
00:07:11.535 "uuid": "36a58ec4-59db-4e90-b85e-819d90c15af7", 00:07:11.535 "strip_size_kb": 64, 00:07:11.535 "state": "configuring", 00:07:11.535 "raid_level": "concat", 00:07:11.535 "superblock": true, 00:07:11.535 "num_base_bdevs": 3, 00:07:11.535 "num_base_bdevs_discovered": 1, 00:07:11.535 "num_base_bdevs_operational": 3, 00:07:11.535 "base_bdevs_list": [ 00:07:11.535 { 00:07:11.535 "name": "pt1", 00:07:11.535 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:11.535 "is_configured": true, 00:07:11.535 "data_offset": 2048, 00:07:11.535 "data_size": 63488 00:07:11.535 }, 00:07:11.535 { 00:07:11.535 "name": null, 00:07:11.535 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:11.535 "is_configured": false, 00:07:11.535 "data_offset": 2048, 00:07:11.535 "data_size": 63488 00:07:11.535 }, 00:07:11.535 { 00:07:11.536 "name": null, 00:07:11.536 "uuid": "00000000-0000-0000-0000-000000000003", 00:07:11.536 "is_configured": false, 00:07:11.536 "data_offset": 2048, 00:07:11.536 "data_size": 63488 00:07:11.536 } 00:07:11.536 ] 00:07:11.536 }' 00:07:11.536 19:48:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:11.536 19:48:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.795 19:48:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:07:11.795 19:48:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:11.795 19:48:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:11.795 19:48:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.795 [2024-11-26 19:48:02.540090] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:11.795 [2024-11-26 19:48:02.540340] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:11.795 [2024-11-26 19:48:02.540386] 
vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:07:11.795 [2024-11-26 19:48:02.540396] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:11.795 [2024-11-26 19:48:02.540864] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:11.795 [2024-11-26 19:48:02.540880] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:11.795 [2024-11-26 19:48:02.540973] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:11.795 [2024-11-26 19:48:02.540999] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:11.795 pt2 00:07:11.795 19:48:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:11.795 19:48:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:07:11.795 19:48:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:11.795 19:48:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.795 [2024-11-26 19:48:02.548092] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:07:11.795 19:48:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:11.795 19:48:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:07:11.795 19:48:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:11.795 19:48:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:11.795 19:48:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:11.795 19:48:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:11.795 19:48:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 
-- # local num_base_bdevs_operational=3 00:07:11.795 19:48:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:11.795 19:48:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:11.795 19:48:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:11.795 19:48:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:11.795 19:48:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:11.795 19:48:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:11.795 19:48:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:11.795 19:48:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.795 19:48:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:11.795 19:48:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:11.795 "name": "raid_bdev1", 00:07:11.795 "uuid": "36a58ec4-59db-4e90-b85e-819d90c15af7", 00:07:11.795 "strip_size_kb": 64, 00:07:11.795 "state": "configuring", 00:07:11.795 "raid_level": "concat", 00:07:11.795 "superblock": true, 00:07:11.795 "num_base_bdevs": 3, 00:07:11.795 "num_base_bdevs_discovered": 1, 00:07:11.795 "num_base_bdevs_operational": 3, 00:07:11.795 "base_bdevs_list": [ 00:07:11.795 { 00:07:11.795 "name": "pt1", 00:07:11.795 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:11.795 "is_configured": true, 00:07:11.795 "data_offset": 2048, 00:07:11.795 "data_size": 63488 00:07:11.795 }, 00:07:11.795 { 00:07:11.795 "name": null, 00:07:11.795 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:11.795 "is_configured": false, 00:07:11.795 "data_offset": 0, 00:07:11.795 "data_size": 63488 00:07:11.795 }, 00:07:11.795 { 00:07:11.795 "name": null, 00:07:11.795 
"uuid": "00000000-0000-0000-0000-000000000003", 00:07:11.795 "is_configured": false, 00:07:11.795 "data_offset": 2048, 00:07:11.795 "data_size": 63488 00:07:11.795 } 00:07:11.795 ] 00:07:11.795 }' 00:07:11.795 19:48:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:11.795 19:48:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.054 19:48:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:07:12.054 19:48:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:12.054 19:48:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:12.054 19:48:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.054 19:48:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.054 [2024-11-26 19:48:02.880143] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:12.054 [2024-11-26 19:48:02.880232] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:12.054 [2024-11-26 19:48:02.880251] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:07:12.054 [2024-11-26 19:48:02.880263] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:12.054 [2024-11-26 19:48:02.880766] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:12.054 [2024-11-26 19:48:02.880796] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:12.054 [2024-11-26 19:48:02.880883] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:12.054 [2024-11-26 19:48:02.880908] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:12.054 pt2 00:07:12.054 19:48:02 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.054 19:48:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:07:12.054 19:48:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:12.054 19:48:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:07:12.054 19:48:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.055 19:48:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.055 [2024-11-26 19:48:02.892169] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:07:12.055 [2024-11-26 19:48:02.892241] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:12.055 [2024-11-26 19:48:02.892259] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:07:12.055 [2024-11-26 19:48:02.892271] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:12.055 [2024-11-26 19:48:02.892764] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:12.055 [2024-11-26 19:48:02.892795] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:07:12.055 [2024-11-26 19:48:02.892891] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:07:12.055 [2024-11-26 19:48:02.892916] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:07:12.055 [2024-11-26 19:48:02.893066] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:12.055 [2024-11-26 19:48:02.893085] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:07:12.055 [2024-11-26 19:48:02.893368] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000005ee0 00:07:12.055 [2024-11-26 19:48:02.893529] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:12.055 [2024-11-26 19:48:02.893543] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:12.055 [2024-11-26 19:48:02.893682] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:12.055 pt3 00:07:12.055 19:48:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.055 19:48:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:07:12.055 19:48:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:12.055 19:48:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:07:12.055 19:48:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:12.055 19:48:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:12.055 19:48:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:12.055 19:48:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:12.055 19:48:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:12.055 19:48:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:12.055 19:48:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:12.055 19:48:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:12.055 19:48:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:12.055 19:48:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:12.055 19:48:02 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:12.055 19:48:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.055 19:48:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.055 19:48:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.055 19:48:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:12.055 "name": "raid_bdev1", 00:07:12.055 "uuid": "36a58ec4-59db-4e90-b85e-819d90c15af7", 00:07:12.055 "strip_size_kb": 64, 00:07:12.055 "state": "online", 00:07:12.055 "raid_level": "concat", 00:07:12.055 "superblock": true, 00:07:12.055 "num_base_bdevs": 3, 00:07:12.055 "num_base_bdevs_discovered": 3, 00:07:12.055 "num_base_bdevs_operational": 3, 00:07:12.055 "base_bdevs_list": [ 00:07:12.055 { 00:07:12.055 "name": "pt1", 00:07:12.055 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:12.055 "is_configured": true, 00:07:12.055 "data_offset": 2048, 00:07:12.055 "data_size": 63488 00:07:12.055 }, 00:07:12.055 { 00:07:12.055 "name": "pt2", 00:07:12.055 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:12.055 "is_configured": true, 00:07:12.055 "data_offset": 2048, 00:07:12.055 "data_size": 63488 00:07:12.055 }, 00:07:12.055 { 00:07:12.055 "name": "pt3", 00:07:12.055 "uuid": "00000000-0000-0000-0000-000000000003", 00:07:12.055 "is_configured": true, 00:07:12.055 "data_offset": 2048, 00:07:12.055 "data_size": 63488 00:07:12.055 } 00:07:12.055 ] 00:07:12.055 }' 00:07:12.055 19:48:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:12.055 19:48:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.312 19:48:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:07:12.313 19:48:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local 
raid_bdev_name=raid_bdev1 00:07:12.313 19:48:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:12.313 19:48:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:12.313 19:48:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:12.313 19:48:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:12.313 19:48:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:12.313 19:48:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:12.313 19:48:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.313 19:48:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.313 [2024-11-26 19:48:03.236626] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:12.578 19:48:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.578 19:48:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:12.578 "name": "raid_bdev1", 00:07:12.578 "aliases": [ 00:07:12.578 "36a58ec4-59db-4e90-b85e-819d90c15af7" 00:07:12.578 ], 00:07:12.578 "product_name": "Raid Volume", 00:07:12.578 "block_size": 512, 00:07:12.578 "num_blocks": 190464, 00:07:12.578 "uuid": "36a58ec4-59db-4e90-b85e-819d90c15af7", 00:07:12.578 "assigned_rate_limits": { 00:07:12.578 "rw_ios_per_sec": 0, 00:07:12.578 "rw_mbytes_per_sec": 0, 00:07:12.578 "r_mbytes_per_sec": 0, 00:07:12.578 "w_mbytes_per_sec": 0 00:07:12.578 }, 00:07:12.578 "claimed": false, 00:07:12.578 "zoned": false, 00:07:12.578 "supported_io_types": { 00:07:12.578 "read": true, 00:07:12.578 "write": true, 00:07:12.578 "unmap": true, 00:07:12.578 "flush": true, 00:07:12.578 "reset": true, 00:07:12.578 "nvme_admin": false, 00:07:12.578 "nvme_io": false, 00:07:12.578 
"nvme_io_md": false, 00:07:12.578 "write_zeroes": true, 00:07:12.578 "zcopy": false, 00:07:12.578 "get_zone_info": false, 00:07:12.578 "zone_management": false, 00:07:12.578 "zone_append": false, 00:07:12.578 "compare": false, 00:07:12.578 "compare_and_write": false, 00:07:12.578 "abort": false, 00:07:12.578 "seek_hole": false, 00:07:12.578 "seek_data": false, 00:07:12.578 "copy": false, 00:07:12.578 "nvme_iov_md": false 00:07:12.578 }, 00:07:12.578 "memory_domains": [ 00:07:12.578 { 00:07:12.578 "dma_device_id": "system", 00:07:12.578 "dma_device_type": 1 00:07:12.578 }, 00:07:12.578 { 00:07:12.578 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:12.578 "dma_device_type": 2 00:07:12.578 }, 00:07:12.578 { 00:07:12.578 "dma_device_id": "system", 00:07:12.578 "dma_device_type": 1 00:07:12.578 }, 00:07:12.578 { 00:07:12.578 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:12.578 "dma_device_type": 2 00:07:12.578 }, 00:07:12.578 { 00:07:12.578 "dma_device_id": "system", 00:07:12.578 "dma_device_type": 1 00:07:12.578 }, 00:07:12.578 { 00:07:12.578 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:12.578 "dma_device_type": 2 00:07:12.578 } 00:07:12.578 ], 00:07:12.578 "driver_specific": { 00:07:12.578 "raid": { 00:07:12.578 "uuid": "36a58ec4-59db-4e90-b85e-819d90c15af7", 00:07:12.578 "strip_size_kb": 64, 00:07:12.578 "state": "online", 00:07:12.578 "raid_level": "concat", 00:07:12.578 "superblock": true, 00:07:12.578 "num_base_bdevs": 3, 00:07:12.578 "num_base_bdevs_discovered": 3, 00:07:12.578 "num_base_bdevs_operational": 3, 00:07:12.578 "base_bdevs_list": [ 00:07:12.578 { 00:07:12.578 "name": "pt1", 00:07:12.578 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:12.578 "is_configured": true, 00:07:12.578 "data_offset": 2048, 00:07:12.578 "data_size": 63488 00:07:12.578 }, 00:07:12.578 { 00:07:12.578 "name": "pt2", 00:07:12.578 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:12.578 "is_configured": true, 00:07:12.578 "data_offset": 2048, 00:07:12.578 "data_size": 
63488 00:07:12.578 }, 00:07:12.578 { 00:07:12.578 "name": "pt3", 00:07:12.578 "uuid": "00000000-0000-0000-0000-000000000003", 00:07:12.578 "is_configured": true, 00:07:12.578 "data_offset": 2048, 00:07:12.578 "data_size": 63488 00:07:12.578 } 00:07:12.578 ] 00:07:12.578 } 00:07:12.578 } 00:07:12.578 }' 00:07:12.578 19:48:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:12.578 19:48:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:12.578 pt2 00:07:12.578 pt3' 00:07:12.578 19:48:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:12.578 19:48:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:12.578 19:48:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:12.578 19:48:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:12.578 19:48:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:12.578 19:48:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.578 19:48:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.578 19:48:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.578 19:48:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:12.578 19:48:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:12.578 19:48:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:12.578 19:48:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, 
.md_size, .md_interleave, .dif_type] | join(" ")' 00:07:12.578 19:48:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:12.578 19:48:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.578 19:48:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.578 19:48:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.578 19:48:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:12.578 19:48:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:12.578 19:48:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:12.578 19:48:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:12.578 19:48:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:07:12.578 19:48:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.578 19:48:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.578 19:48:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.578 19:48:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:12.578 19:48:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:12.578 19:48:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:12.578 19:48:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.578 19:48:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.578 19:48:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r 
'.[] | .uuid' 00:07:12.578 [2024-11-26 19:48:03.444638] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:12.578 19:48:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.578 19:48:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 36a58ec4-59db-4e90-b85e-819d90c15af7 '!=' 36a58ec4-59db-4e90-b85e-819d90c15af7 ']' 00:07:12.578 19:48:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:07:12.578 19:48:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:12.578 19:48:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:12.578 19:48:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 65289 00:07:12.578 19:48:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 65289 ']' 00:07:12.578 19:48:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 65289 00:07:12.578 19:48:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:07:12.578 19:48:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:12.578 19:48:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65289 00:07:12.578 19:48:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:12.578 19:48:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:12.578 killing process with pid 65289 00:07:12.578 19:48:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65289' 00:07:12.578 19:48:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 65289 00:07:12.578 [2024-11-26 19:48:03.496804] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:12.578 19:48:03 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 65289 00:07:12.579 [2024-11-26 19:48:03.496919] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:12.579 [2024-11-26 19:48:03.496994] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:12.579 [2024-11-26 19:48:03.497007] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:12.867 [2024-11-26 19:48:03.698519] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:13.799 19:48:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:07:13.799 00:07:13.799 real 0m4.101s 00:07:13.799 user 0m5.854s 00:07:13.799 sys 0m0.672s 00:07:13.800 19:48:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:13.800 19:48:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:13.800 ************************************ 00:07:13.800 END TEST raid_superblock_test 00:07:13.800 ************************************ 00:07:13.800 19:48:04 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 3 read 00:07:13.800 19:48:04 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:13.800 19:48:04 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:13.800 19:48:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:13.800 ************************************ 00:07:13.800 START TEST raid_read_error_test 00:07:13.800 ************************************ 00:07:13.800 19:48:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 3 read 00:07:13.800 19:48:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:07:13.800 19:48:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 
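The teardown traced above (`autotest_common.sh@954`–`@978`) goes through a `killprocess` helper: probe the pid with `kill -0`, look up its command name with `ps -o comm=`, send the signal, then reap the job. The sketch below is a simplified reconstruction from the trace, not the real `autotest_common.sh` implementation; the real helper also special-cases processes running under `sudo` and uses `ps --no-headers`.

```shell
# Simplified sketch of the killprocess pattern seen in the trace above.
# Assumption: the target is a background job of this shell, so `wait` can reap it.
killprocess() {
    local pid=$1 name
    # kill -0 only probes whether the pid exists; it sends no signal
    if ! kill -0 "$pid" 2>/dev/null; then
        echo "process with pid $pid no longer exists"
        return 0
    fi
    name=$(ps -o comm= -p "$pid" 2>/dev/null || true)
    echo "killing process with pid $pid ($name)"
    kill "$pid"
    # reap the job so a later `kill -0` does not see a zombie
    wait "$pid" 2>/dev/null || true
}

sleep 60 &    # stand-in for the SPDK app the test shuts down
bgpid=$!
killprocess "$bgpid"
kill -0 "$bgpid" 2>/dev/null && alive=yes || alive=no
```

After `killprocess` returns, the pid is gone, which is why the trace can immediately proceed to `wait 65289` and the `raid_bdev_fini_start` debug output without racing the dying target.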
00:07:13.800 19:48:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:07:13.800 19:48:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:13.800 19:48:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:13.800 19:48:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:13.800 19:48:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:13.800 19:48:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:13.800 19:48:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:13.800 19:48:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:13.800 19:48:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:13.800 19:48:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:07:13.800 19:48:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:13.800 19:48:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:13.800 19:48:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:07:13.800 19:48:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:13.800 19:48:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:13.800 19:48:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:13.800 19:48:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:13.800 19:48:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:13.800 19:48:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:13.800 19:48:04 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:07:13.800 19:48:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:13.800 19:48:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:13.800 19:48:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:13.800 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:13.800 19:48:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.6V7ohfG0z3 00:07:13.800 19:48:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65531 00:07:13.800 19:48:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65531 00:07:13.800 19:48:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:13.800 19:48:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 65531 ']' 00:07:13.800 19:48:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:13.800 19:48:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:13.800 19:48:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:13.800 19:48:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:13.800 19:48:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:13.800 [2024-11-26 19:48:04.596065] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 
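The bdevperf launch above blocks in `waitforlisten 65531` until the app is listening on `/var/tmp/spdk.sock` ("Waiting for process to start up and listen on UNIX domain socket..."). The following is a rough sketch of that polling loop under stated assumptions: the real helper tests for a UNIX-domain socket (`-S`) and verifies it answers RPCs, while this version polls plain file existence (`-e`) so the demo needs no socket-creating tools; the retry count and paths are illustrative.

```shell
# Sketch of a waitforlisten-style poll loop (not the real autotest_common.sh code).
waitforlisten() {
    local pid=$1 rpc_addr=$2 max_retries=100 i
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    for ((i = 0; i < max_retries; i++)); do
        # bail out early if the target process already died
        kill -0 "$pid" 2>/dev/null || return 1
        # real helper: [ -S "$rpc_addr" ] plus an RPC probe; -e keeps this self-contained
        [ -e "$rpc_addr" ] && return 0
        sleep 0.1
    done
    return 1
}

rpc_addr=$(mktemp -u)                      # illustrative stand-in for /var/tmp/spdk.sock
( sleep 0.2; : > "$rpc_addr"; sleep 2 ) &  # fake server: "listens" after ~0.2s
srv=$!
waitforlisten "$srv" "$rpc_addr" && listening=yes || listening=no
kill "$srv" 2>/dev/null
wait "$srv" 2>/dev/null || true
rm -f "$rpc_addr"
```

Bounding the loop by both `max_retries` and the target's liveness is what lets the harness fail fast when bdevperf crashes on startup instead of hanging for the full timeout.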
00:07:13.800 [2024-11-26 19:48:04.596214] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65531 ] 00:07:14.057 [2024-11-26 19:48:04.755315] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.057 [2024-11-26 19:48:04.857262] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.314 [2024-11-26 19:48:04.993906] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:14.315 [2024-11-26 19:48:04.993939] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:14.573 19:48:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:14.573 19:48:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:07:14.573 19:48:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:14.573 19:48:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:14.573 19:48:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:14.573 19:48:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.573 BaseBdev1_malloc 00:07:14.573 19:48:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:14.573 19:48:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:14.573 19:48:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:14.573 19:48:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.573 true 00:07:14.573 19:48:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:07:14.573 19:48:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:14.573 19:48:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:14.573 19:48:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.573 [2024-11-26 19:48:05.484076] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:14.573 [2024-11-26 19:48:05.484150] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:14.573 [2024-11-26 19:48:05.484173] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:14.573 [2024-11-26 19:48:05.484184] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:14.573 [2024-11-26 19:48:05.486294] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:14.573 [2024-11-26 19:48:05.486338] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:14.573 BaseBdev1 00:07:14.573 19:48:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:14.573 19:48:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:14.573 19:48:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:14.573 19:48:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:14.573 19:48:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.832 BaseBdev2_malloc 00:07:14.832 19:48:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:14.832 19:48:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:14.832 19:48:05 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:07:14.832 19:48:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.832 true 00:07:14.832 19:48:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:14.832 19:48:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:14.832 19:48:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:14.832 19:48:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.832 [2024-11-26 19:48:05.530365] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:14.832 [2024-11-26 19:48:05.530585] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:14.832 [2024-11-26 19:48:05.530610] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:14.832 [2024-11-26 19:48:05.530619] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:14.832 [2024-11-26 19:48:05.532679] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:14.832 [2024-11-26 19:48:05.532715] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:14.832 BaseBdev2 00:07:14.832 19:48:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:14.832 19:48:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:14.832 19:48:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:07:14.832 19:48:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:14.832 19:48:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.832 BaseBdev3_malloc 00:07:14.832 19:48:05 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:14.832 19:48:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:07:14.832 19:48:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:14.832 19:48:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.832 true 00:07:14.832 19:48:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:14.832 19:48:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:07:14.832 19:48:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:14.832 19:48:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.832 [2024-11-26 19:48:05.591911] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:07:14.832 [2024-11-26 19:48:05.591979] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:14.832 [2024-11-26 19:48:05.591997] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:07:14.832 [2024-11-26 19:48:05.592008] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:14.832 [2024-11-26 19:48:05.593987] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:14.832 [2024-11-26 19:48:05.594157] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:07:14.832 BaseBdev3 00:07:14.832 19:48:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:14.832 19:48:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:07:14.832 19:48:05 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:07:14.832 19:48:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.832 [2024-11-26 19:48:05.600002] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:14.832 [2024-11-26 19:48:05.601820] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:14.832 [2024-11-26 19:48:05.601957] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:07:14.832 [2024-11-26 19:48:05.602199] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:07:14.832 [2024-11-26 19:48:05.602259] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:07:14.832 [2024-11-26 19:48:05.602554] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:07:14.832 [2024-11-26 19:48:05.602714] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:07:14.832 [2024-11-26 19:48:05.602740] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:07:14.832 [2024-11-26 19:48:05.602965] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:14.832 19:48:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:14.833 19:48:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:07:14.833 19:48:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:14.833 19:48:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:14.833 19:48:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:14.833 19:48:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:14.833 19:48:05 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:14.833 19:48:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:14.833 19:48:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:14.833 19:48:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:14.833 19:48:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:14.833 19:48:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:14.833 19:48:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:14.833 19:48:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:14.833 19:48:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.833 19:48:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:14.833 19:48:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:14.833 "name": "raid_bdev1", 00:07:14.833 "uuid": "bb8b5903-54ab-4b35-8c1c-94b8a2ca28d0", 00:07:14.833 "strip_size_kb": 64, 00:07:14.833 "state": "online", 00:07:14.833 "raid_level": "concat", 00:07:14.833 "superblock": true, 00:07:14.833 "num_base_bdevs": 3, 00:07:14.833 "num_base_bdevs_discovered": 3, 00:07:14.833 "num_base_bdevs_operational": 3, 00:07:14.833 "base_bdevs_list": [ 00:07:14.833 { 00:07:14.833 "name": "BaseBdev1", 00:07:14.833 "uuid": "82877ee8-022a-51a0-9fba-3d53c818e6fc", 00:07:14.833 "is_configured": true, 00:07:14.833 "data_offset": 2048, 00:07:14.833 "data_size": 63488 00:07:14.833 }, 00:07:14.833 { 00:07:14.833 "name": "BaseBdev2", 00:07:14.833 "uuid": "e4ca6af1-4c1d-599b-91a4-55d88ed8e7dc", 00:07:14.833 "is_configured": true, 00:07:14.833 "data_offset": 2048, 00:07:14.833 "data_size": 63488 
00:07:14.833 }, 00:07:14.833 { 00:07:14.833 "name": "BaseBdev3", 00:07:14.833 "uuid": "32b3b2c4-d4fd-5b16-9d17-9623decc261d", 00:07:14.833 "is_configured": true, 00:07:14.833 "data_offset": 2048, 00:07:14.833 "data_size": 63488 00:07:14.833 } 00:07:14.833 ] 00:07:14.833 }' 00:07:14.833 19:48:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:14.833 19:48:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.091 19:48:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:15.091 19:48:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:15.091 [2024-11-26 19:48:06.020953] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:07:16.025 19:48:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:07:16.025 19:48:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.025 19:48:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.025 19:48:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.025 19:48:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:16.025 19:48:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:07:16.025 19:48:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:07:16.025 19:48:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:07:16.025 19:48:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:16.025 19:48:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:07:16.025 19:48:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:16.025 19:48:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:16.025 19:48:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:16.025 19:48:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:16.025 19:48:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:16.025 19:48:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:16.025 19:48:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:16.025 19:48:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:16.025 19:48:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.025 19:48:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.025 19:48:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:16.284 19:48:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.284 19:48:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:16.284 "name": "raid_bdev1", 00:07:16.284 "uuid": "bb8b5903-54ab-4b35-8c1c-94b8a2ca28d0", 00:07:16.284 "strip_size_kb": 64, 00:07:16.284 "state": "online", 00:07:16.284 "raid_level": "concat", 00:07:16.284 "superblock": true, 00:07:16.284 "num_base_bdevs": 3, 00:07:16.284 "num_base_bdevs_discovered": 3, 00:07:16.284 "num_base_bdevs_operational": 3, 00:07:16.284 "base_bdevs_list": [ 00:07:16.284 { 00:07:16.284 "name": "BaseBdev1", 00:07:16.284 "uuid": "82877ee8-022a-51a0-9fba-3d53c818e6fc", 00:07:16.284 "is_configured": true, 00:07:16.284 "data_offset": 2048, 00:07:16.284 "data_size": 63488 
00:07:16.284 }, 00:07:16.284 { 00:07:16.284 "name": "BaseBdev2", 00:07:16.284 "uuid": "e4ca6af1-4c1d-599b-91a4-55d88ed8e7dc", 00:07:16.284 "is_configured": true, 00:07:16.284 "data_offset": 2048, 00:07:16.284 "data_size": 63488 00:07:16.284 }, 00:07:16.284 { 00:07:16.284 "name": "BaseBdev3", 00:07:16.284 "uuid": "32b3b2c4-d4fd-5b16-9d17-9623decc261d", 00:07:16.284 "is_configured": true, 00:07:16.284 "data_offset": 2048, 00:07:16.284 "data_size": 63488 00:07:16.284 } 00:07:16.284 ] 00:07:16.284 }' 00:07:16.284 19:48:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:16.284 19:48:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.542 19:48:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:16.542 19:48:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.542 19:48:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.542 [2024-11-26 19:48:07.266530] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:16.542 [2024-11-26 19:48:07.266706] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:16.542 [2024-11-26 19:48:07.269262] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:16.542 [2024-11-26 19:48:07.269450] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:16.542 [2024-11-26 19:48:07.269550] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:16.542 [2024-11-26 19:48:07.269616] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:07:16.542 { 00:07:16.542 "results": [ 00:07:16.542 { 00:07:16.542 "job": "raid_bdev1", 00:07:16.542 "core_mask": "0x1", 00:07:16.542 "workload": "randrw", 00:07:16.542 "percentage": 50, 
00:07:16.542 "status": "finished", 00:07:16.542 "queue_depth": 1, 00:07:16.542 "io_size": 131072, 00:07:16.542 "runtime": 1.243893, 00:07:16.542 "iops": 16257.829250586667, 00:07:16.542 "mibps": 2032.2286563233333, 00:07:16.542 "io_failed": 1, 00:07:16.542 "io_timeout": 0, 00:07:16.542 "avg_latency_us": 84.86068159688413, 00:07:16.542 "min_latency_us": 25.993846153846153, 00:07:16.542 "max_latency_us": 1380.0369230769231 00:07:16.542 } 00:07:16.542 ], 00:07:16.542 "core_count": 1 00:07:16.542 } 00:07:16.542 19:48:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.542 19:48:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65531 00:07:16.542 19:48:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 65531 ']' 00:07:16.542 19:48:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 65531 00:07:16.542 19:48:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:07:16.542 19:48:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:16.542 19:48:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65531 00:07:16.542 killing process with pid 65531 00:07:16.542 19:48:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:16.542 19:48:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:16.542 19:48:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65531' 00:07:16.542 19:48:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 65531 00:07:16.542 [2024-11-26 19:48:07.300086] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:16.542 19:48:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 65531 00:07:16.542 [2024-11-26 
19:48:07.424479] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:17.475 19:48:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:17.475 19:48:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:17.475 19:48:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.6V7ohfG0z3 00:07:17.475 19:48:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.80 00:07:17.475 19:48:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:07:17.475 19:48:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:17.475 19:48:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:17.475 19:48:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.80 != \0\.\0\0 ]] 00:07:17.475 00:07:17.475 real 0m3.581s 00:07:17.475 user 0m4.233s 00:07:17.475 sys 0m0.439s 00:07:17.475 19:48:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:17.475 19:48:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.475 ************************************ 00:07:17.475 END TEST raid_read_error_test 00:07:17.475 ************************************ 00:07:17.475 19:48:08 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 3 write 00:07:17.475 19:48:08 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:17.475 19:48:08 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:17.475 19:48:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:17.475 ************************************ 00:07:17.475 START TEST raid_write_error_test 00:07:17.475 ************************************ 00:07:17.475 19:48:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 3 write 00:07:17.475 19:48:08 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:07:17.475 19:48:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:07:17.475 19:48:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:07:17.475 19:48:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:17.475 19:48:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:17.475 19:48:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:17.475 19:48:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:17.475 19:48:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:17.475 19:48:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:17.475 19:48:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:17.475 19:48:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:17.475 19:48:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:07:17.475 19:48:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:17.475 19:48:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:17.475 19:48:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:07:17.475 19:48:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:17.475 19:48:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:17.475 19:48:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:17.475 19:48:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:17.475 19:48:08 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:17.475 19:48:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:17.475 19:48:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:07:17.475 19:48:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:17.475 19:48:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:17.475 19:48:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:17.475 19:48:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.pms0gbTYvY 00:07:17.475 19:48:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65664 00:07:17.475 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:17.475 19:48:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65664 00:07:17.475 19:48:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 65664 ']' 00:07:17.475 19:48:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:17.475 19:48:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:17.475 19:48:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:17.475 19:48:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:17.475 19:48:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:17.476 19:48:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.476 [2024-11-26 19:48:08.220240] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 00:07:17.476 [2024-11-26 19:48:08.220403] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65664 ] 00:07:17.476 [2024-11-26 19:48:08.382090] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.733 [2024-11-26 19:48:08.503039] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.733 [2024-11-26 19:48:08.651754] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:17.733 [2024-11-26 19:48:08.651831] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:18.297 19:48:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:18.297 19:48:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:07:18.297 19:48:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:18.297 19:48:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:18.297 19:48:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.297 19:48:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.297 BaseBdev1_malloc 00:07:18.297 19:48:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.297 19:48:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:07:18.297 19:48:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.297 19:48:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.297 true 00:07:18.297 19:48:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.297 19:48:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:18.297 19:48:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.297 19:48:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.297 [2024-11-26 19:48:09.116073] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:18.297 [2024-11-26 19:48:09.116142] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:18.297 [2024-11-26 19:48:09.116163] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:18.297 [2024-11-26 19:48:09.116176] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:18.297 [2024-11-26 19:48:09.118502] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:18.297 [2024-11-26 19:48:09.118540] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:18.297 BaseBdev1 00:07:18.297 19:48:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.297 19:48:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:18.297 19:48:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:18.297 19:48:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.297 19:48:09 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:07:18.297 BaseBdev2_malloc 00:07:18.297 19:48:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.297 19:48:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:18.298 19:48:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.298 19:48:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.298 true 00:07:18.298 19:48:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.298 19:48:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:18.298 19:48:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.298 19:48:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.298 [2024-11-26 19:48:09.166470] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:18.298 [2024-11-26 19:48:09.166688] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:18.298 [2024-11-26 19:48:09.166713] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:18.298 [2024-11-26 19:48:09.166724] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:18.298 [2024-11-26 19:48:09.169094] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:18.298 [2024-11-26 19:48:09.169133] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:18.298 BaseBdev2 00:07:18.298 19:48:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.298 19:48:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:18.298 19:48:09 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:07:18.298 19:48:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.298 19:48:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.298 BaseBdev3_malloc 00:07:18.298 19:48:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.298 19:48:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:07:18.298 19:48:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.298 19:48:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.298 true 00:07:18.298 19:48:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.298 19:48:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:07:18.298 19:48:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.298 19:48:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.298 [2024-11-26 19:48:09.232399] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:07:18.555 [2024-11-26 19:48:09.232588] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:18.555 [2024-11-26 19:48:09.232616] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:07:18.555 [2024-11-26 19:48:09.232628] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:18.555 [2024-11-26 19:48:09.235024] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:18.555 [2024-11-26 19:48:09.235062] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:07:18.555 BaseBdev3 00:07:18.555 19:48:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.555 19:48:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:07:18.555 19:48:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.555 19:48:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.555 [2024-11-26 19:48:09.240496] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:18.555 [2024-11-26 19:48:09.242490] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:18.555 [2024-11-26 19:48:09.242573] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:07:18.555 [2024-11-26 19:48:09.242790] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:07:18.555 [2024-11-26 19:48:09.242806] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:07:18.555 [2024-11-26 19:48:09.243113] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:07:18.555 [2024-11-26 19:48:09.243277] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:07:18.555 [2024-11-26 19:48:09.243296] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:07:18.555 [2024-11-26 19:48:09.243470] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:18.555 19:48:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.555 19:48:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:07:18.555 19:48:09 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:18.555 19:48:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:18.555 19:48:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:18.555 19:48:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:18.555 19:48:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:18.555 19:48:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:18.555 19:48:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:18.555 19:48:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:18.555 19:48:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:18.555 19:48:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:18.555 19:48:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:18.555 19:48:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.555 19:48:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.555 19:48:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.555 19:48:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:18.555 "name": "raid_bdev1", 00:07:18.555 "uuid": "1e918d80-4d2e-44de-96ad-0589087a7a27", 00:07:18.555 "strip_size_kb": 64, 00:07:18.555 "state": "online", 00:07:18.555 "raid_level": "concat", 00:07:18.555 "superblock": true, 00:07:18.555 "num_base_bdevs": 3, 00:07:18.555 "num_base_bdevs_discovered": 3, 00:07:18.555 "num_base_bdevs_operational": 3, 00:07:18.555 "base_bdevs_list": [ 00:07:18.555 { 00:07:18.555 
"name": "BaseBdev1", 00:07:18.555 "uuid": "1aa48bf0-0049-5357-9513-aa2697619695", 00:07:18.555 "is_configured": true, 00:07:18.555 "data_offset": 2048, 00:07:18.555 "data_size": 63488 00:07:18.555 }, 00:07:18.555 { 00:07:18.555 "name": "BaseBdev2", 00:07:18.555 "uuid": "38ed1531-4704-51b9-a694-d5002e0a5d06", 00:07:18.555 "is_configured": true, 00:07:18.555 "data_offset": 2048, 00:07:18.555 "data_size": 63488 00:07:18.555 }, 00:07:18.555 { 00:07:18.555 "name": "BaseBdev3", 00:07:18.555 "uuid": "2886d533-457c-55c1-8304-c0a257e16fc3", 00:07:18.555 "is_configured": true, 00:07:18.555 "data_offset": 2048, 00:07:18.555 "data_size": 63488 00:07:18.555 } 00:07:18.555 ] 00:07:18.555 }' 00:07:18.555 19:48:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:18.555 19:48:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.813 19:48:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:18.813 19:48:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:18.813 [2024-11-26 19:48:09.649608] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:07:19.801 19:48:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:07:19.801 19:48:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:19.801 19:48:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.801 19:48:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:19.801 19:48:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:19.801 19:48:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:07:19.801 19:48:10 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:07:19.801 19:48:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:07:19.801 19:48:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:19.801 19:48:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:19.801 19:48:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:19.801 19:48:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:19.801 19:48:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:19.801 19:48:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:19.801 19:48:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:19.801 19:48:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:19.801 19:48:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:19.801 19:48:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:19.801 19:48:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:19.801 19:48:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:19.801 19:48:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.801 19:48:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:19.801 19:48:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:19.801 "name": "raid_bdev1", 00:07:19.801 "uuid": "1e918d80-4d2e-44de-96ad-0589087a7a27", 00:07:19.801 "strip_size_kb": 64, 00:07:19.801 "state": "online", 
00:07:19.801 "raid_level": "concat", 00:07:19.801 "superblock": true, 00:07:19.801 "num_base_bdevs": 3, 00:07:19.801 "num_base_bdevs_discovered": 3, 00:07:19.801 "num_base_bdevs_operational": 3, 00:07:19.801 "base_bdevs_list": [ 00:07:19.801 { 00:07:19.801 "name": "BaseBdev1", 00:07:19.801 "uuid": "1aa48bf0-0049-5357-9513-aa2697619695", 00:07:19.801 "is_configured": true, 00:07:19.801 "data_offset": 2048, 00:07:19.801 "data_size": 63488 00:07:19.801 }, 00:07:19.801 { 00:07:19.801 "name": "BaseBdev2", 00:07:19.801 "uuid": "38ed1531-4704-51b9-a694-d5002e0a5d06", 00:07:19.801 "is_configured": true, 00:07:19.801 "data_offset": 2048, 00:07:19.801 "data_size": 63488 00:07:19.801 }, 00:07:19.801 { 00:07:19.801 "name": "BaseBdev3", 00:07:19.801 "uuid": "2886d533-457c-55c1-8304-c0a257e16fc3", 00:07:19.801 "is_configured": true, 00:07:19.801 "data_offset": 2048, 00:07:19.801 "data_size": 63488 00:07:19.801 } 00:07:19.801 ] 00:07:19.801 }' 00:07:19.801 19:48:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:19.801 19:48:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.060 19:48:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:20.060 19:48:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.060 19:48:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.060 [2024-11-26 19:48:10.899743] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:20.060 [2024-11-26 19:48:10.899783] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:20.060 [2024-11-26 19:48:10.902848] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:20.060 [2024-11-26 19:48:10.902905] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:20.060 [2024-11-26 19:48:10.902954] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:20.060 [2024-11-26 19:48:10.902964] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:07:20.060 19:48:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.060 { 00:07:20.060 "results": [ 00:07:20.060 { 00:07:20.060 "job": "raid_bdev1", 00:07:20.060 "core_mask": "0x1", 00:07:20.060 "workload": "randrw", 00:07:20.060 "percentage": 50, 00:07:20.060 "status": "finished", 00:07:20.060 "queue_depth": 1, 00:07:20.060 "io_size": 131072, 00:07:20.060 "runtime": 1.248083, 00:07:20.060 "iops": 13969.423507891703, 00:07:20.060 "mibps": 1746.1779384864628, 00:07:20.060 "io_failed": 1, 00:07:20.060 "io_timeout": 0, 00:07:20.060 "avg_latency_us": 98.26517444015035, 00:07:20.060 "min_latency_us": 33.28, 00:07:20.060 "max_latency_us": 1701.4153846153847 00:07:20.060 } 00:07:20.060 ], 00:07:20.060 "core_count": 1 00:07:20.060 } 00:07:20.060 19:48:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65664 00:07:20.060 19:48:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 65664 ']' 00:07:20.060 19:48:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 65664 00:07:20.060 19:48:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:07:20.060 19:48:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:20.060 19:48:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65664 00:07:20.060 killing process with pid 65664 00:07:20.060 19:48:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:20.060 19:48:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:20.060 19:48:10 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65664' 00:07:20.060 19:48:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 65664 00:07:20.060 19:48:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 65664 00:07:20.060 [2024-11-26 19:48:10.931603] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:20.318 [2024-11-26 19:48:11.082395] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:21.251 19:48:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:21.251 19:48:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.pms0gbTYvY 00:07:21.251 19:48:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:21.251 ************************************ 00:07:21.251 END TEST raid_write_error_test 00:07:21.251 ************************************ 00:07:21.251 19:48:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.80 00:07:21.251 19:48:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:07:21.251 19:48:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:21.251 19:48:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:21.251 19:48:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.80 != \0\.\0\0 ]] 00:07:21.251 00:07:21.251 real 0m3.768s 00:07:21.251 user 0m4.390s 00:07:21.251 sys 0m0.479s 00:07:21.251 19:48:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:21.251 19:48:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.251 19:48:11 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:07:21.251 19:48:11 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test raid1 3 false 00:07:21.251 19:48:11 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:21.251 19:48:11 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:21.251 19:48:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:21.251 ************************************ 00:07:21.251 START TEST raid_state_function_test 00:07:21.251 ************************************ 00:07:21.251 19:48:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 3 false 00:07:21.251 19:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:07:21.251 19:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:07:21.251 19:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:07:21.251 19:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:21.251 19:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:21.251 19:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:21.251 19:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:21.251 19:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:21.251 19:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:21.251 19:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:21.251 19:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:21.251 19:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:21.251 19:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:07:21.251 19:48:11 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:21.251 19:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:21.251 19:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:07:21.251 19:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:21.251 19:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:21.251 19:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:21.251 19:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:21.251 19:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:21.251 19:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:07:21.251 19:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:07:21.251 19:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:07:21.251 19:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:07:21.251 19:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=65798 00:07:21.251 Process raid pid: 65798 00:07:21.251 19:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 65798' 00:07:21.252 19:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:21.252 19:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 65798 00:07:21.252 19:48:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 65798 ']' 00:07:21.252 19:48:11 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:21.252 19:48:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:21.252 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:21.252 19:48:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:21.252 19:48:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:21.252 19:48:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.252 [2024-11-26 19:48:12.029056] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 00:07:21.252 [2024-11-26 19:48:12.029282] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:21.510 [2024-11-26 19:48:12.187080] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.510 [2024-11-26 19:48:12.305650] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.767 [2024-11-26 19:48:12.454914] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:21.767 [2024-11-26 19:48:12.454975] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:22.025 19:48:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:22.025 19:48:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:07:22.025 19:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:07:22.025 19:48:12 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:07:22.025 19:48:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.025 [2024-11-26 19:48:12.887167] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:22.025 [2024-11-26 19:48:12.887232] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:22.025 [2024-11-26 19:48:12.887243] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:22.025 [2024-11-26 19:48:12.887253] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:22.025 [2024-11-26 19:48:12.887260] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:07:22.026 [2024-11-26 19:48:12.887269] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:07:22.026 19:48:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:22.026 19:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:07:22.026 19:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:22.026 19:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:22.026 19:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:22.026 19:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:22.026 19:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:22.026 19:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:22.026 19:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:22.026 
19:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:22.026 19:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:22.026 19:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:22.026 19:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:22.026 19:48:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:22.026 19:48:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.026 19:48:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:22.026 19:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:22.026 "name": "Existed_Raid", 00:07:22.026 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:22.026 "strip_size_kb": 0, 00:07:22.026 "state": "configuring", 00:07:22.026 "raid_level": "raid1", 00:07:22.026 "superblock": false, 00:07:22.026 "num_base_bdevs": 3, 00:07:22.026 "num_base_bdevs_discovered": 0, 00:07:22.026 "num_base_bdevs_operational": 3, 00:07:22.026 "base_bdevs_list": [ 00:07:22.026 { 00:07:22.026 "name": "BaseBdev1", 00:07:22.026 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:22.026 "is_configured": false, 00:07:22.026 "data_offset": 0, 00:07:22.026 "data_size": 0 00:07:22.026 }, 00:07:22.026 { 00:07:22.026 "name": "BaseBdev2", 00:07:22.026 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:22.026 "is_configured": false, 00:07:22.026 "data_offset": 0, 00:07:22.026 "data_size": 0 00:07:22.026 }, 00:07:22.026 { 00:07:22.026 "name": "BaseBdev3", 00:07:22.026 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:22.026 "is_configured": false, 00:07:22.026 "data_offset": 0, 00:07:22.026 "data_size": 0 00:07:22.026 } 00:07:22.026 ] 00:07:22.026 }' 00:07:22.026 19:48:12 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:22.026 19:48:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.284 19:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:22.284 19:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:22.284 19:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.284 [2024-11-26 19:48:13.215191] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:22.284 [2024-11-26 19:48:13.215238] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:07:22.542 19:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:22.542 19:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:07:22.542 19:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:22.542 19:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.542 [2024-11-26 19:48:13.223182] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:22.542 [2024-11-26 19:48:13.223234] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:22.542 [2024-11-26 19:48:13.223243] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:22.542 [2024-11-26 19:48:13.223252] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:22.542 [2024-11-26 19:48:13.223258] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:07:22.542 [2024-11-26 19:48:13.223267] 
bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:07:22.542 19:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:22.542 19:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:22.542 19:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:22.542 19:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.542 [2024-11-26 19:48:13.258126] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:22.542 BaseBdev1 00:07:22.542 19:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:22.542 19:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:22.542 19:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:07:22.542 19:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:22.542 19:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:22.542 19:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:22.542 19:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:22.542 19:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:22.542 19:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:22.542 19:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.542 19:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:22.542 19:48:13 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:22.542 19:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:22.542 19:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.542 [ 00:07:22.542 { 00:07:22.542 "name": "BaseBdev1", 00:07:22.542 "aliases": [ 00:07:22.542 "4cb959f4-4d7b-44c0-b5ae-90fee074f4b5" 00:07:22.542 ], 00:07:22.542 "product_name": "Malloc disk", 00:07:22.542 "block_size": 512, 00:07:22.542 "num_blocks": 65536, 00:07:22.542 "uuid": "4cb959f4-4d7b-44c0-b5ae-90fee074f4b5", 00:07:22.542 "assigned_rate_limits": { 00:07:22.542 "rw_ios_per_sec": 0, 00:07:22.542 "rw_mbytes_per_sec": 0, 00:07:22.542 "r_mbytes_per_sec": 0, 00:07:22.542 "w_mbytes_per_sec": 0 00:07:22.542 }, 00:07:22.542 "claimed": true, 00:07:22.542 "claim_type": "exclusive_write", 00:07:22.542 "zoned": false, 00:07:22.542 "supported_io_types": { 00:07:22.542 "read": true, 00:07:22.542 "write": true, 00:07:22.542 "unmap": true, 00:07:22.542 "flush": true, 00:07:22.542 "reset": true, 00:07:22.542 "nvme_admin": false, 00:07:22.542 "nvme_io": false, 00:07:22.543 "nvme_io_md": false, 00:07:22.543 "write_zeroes": true, 00:07:22.543 "zcopy": true, 00:07:22.543 "get_zone_info": false, 00:07:22.543 "zone_management": false, 00:07:22.543 "zone_append": false, 00:07:22.543 "compare": false, 00:07:22.543 "compare_and_write": false, 00:07:22.543 "abort": true, 00:07:22.543 "seek_hole": false, 00:07:22.543 "seek_data": false, 00:07:22.543 "copy": true, 00:07:22.543 "nvme_iov_md": false 00:07:22.543 }, 00:07:22.543 "memory_domains": [ 00:07:22.543 { 00:07:22.543 "dma_device_id": "system", 00:07:22.543 "dma_device_type": 1 00:07:22.543 }, 00:07:22.543 { 00:07:22.543 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:22.543 "dma_device_type": 2 00:07:22.543 } 00:07:22.543 ], 00:07:22.543 "driver_specific": {} 00:07:22.543 } 00:07:22.543 ] 00:07:22.543 19:48:13 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:22.543 19:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:22.543 19:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:07:22.543 19:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:22.543 19:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:22.543 19:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:22.543 19:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:22.543 19:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:22.543 19:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:22.543 19:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:22.543 19:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:22.543 19:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:22.543 19:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:22.543 19:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:22.543 19:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.543 19:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:22.543 19:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:22.543 19:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:07:22.543 "name": "Existed_Raid", 00:07:22.543 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:22.543 "strip_size_kb": 0, 00:07:22.543 "state": "configuring", 00:07:22.543 "raid_level": "raid1", 00:07:22.543 "superblock": false, 00:07:22.543 "num_base_bdevs": 3, 00:07:22.543 "num_base_bdevs_discovered": 1, 00:07:22.543 "num_base_bdevs_operational": 3, 00:07:22.543 "base_bdevs_list": [ 00:07:22.543 { 00:07:22.543 "name": "BaseBdev1", 00:07:22.543 "uuid": "4cb959f4-4d7b-44c0-b5ae-90fee074f4b5", 00:07:22.543 "is_configured": true, 00:07:22.543 "data_offset": 0, 00:07:22.543 "data_size": 65536 00:07:22.543 }, 00:07:22.543 { 00:07:22.543 "name": "BaseBdev2", 00:07:22.543 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:22.543 "is_configured": false, 00:07:22.543 "data_offset": 0, 00:07:22.543 "data_size": 0 00:07:22.543 }, 00:07:22.543 { 00:07:22.543 "name": "BaseBdev3", 00:07:22.543 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:22.543 "is_configured": false, 00:07:22.543 "data_offset": 0, 00:07:22.543 "data_size": 0 00:07:22.543 } 00:07:22.543 ] 00:07:22.543 }' 00:07:22.543 19:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:22.543 19:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.801 19:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:22.801 19:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:22.801 19:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.801 [2024-11-26 19:48:13.574250] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:22.801 [2024-11-26 19:48:13.574314] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:07:22.801 19:48:13 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:22.801 19:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:07:22.801 19:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:22.801 19:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.801 [2024-11-26 19:48:13.582292] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:22.801 [2024-11-26 19:48:13.584299] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:22.801 [2024-11-26 19:48:13.584358] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:22.801 [2024-11-26 19:48:13.584370] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:07:22.801 [2024-11-26 19:48:13.584379] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:07:22.801 19:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:22.801 19:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:22.801 19:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:22.801 19:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:07:22.801 19:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:22.801 19:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:22.801 19:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:22.801 19:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:07:22.801 19:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:22.801 19:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:22.801 19:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:22.801 19:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:22.801 19:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:22.801 19:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:22.801 19:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:22.801 19:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:22.801 19:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.801 19:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:22.801 19:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:22.801 "name": "Existed_Raid", 00:07:22.801 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:22.801 "strip_size_kb": 0, 00:07:22.801 "state": "configuring", 00:07:22.801 "raid_level": "raid1", 00:07:22.801 "superblock": false, 00:07:22.801 "num_base_bdevs": 3, 00:07:22.801 "num_base_bdevs_discovered": 1, 00:07:22.801 "num_base_bdevs_operational": 3, 00:07:22.801 "base_bdevs_list": [ 00:07:22.801 { 00:07:22.801 "name": "BaseBdev1", 00:07:22.801 "uuid": "4cb959f4-4d7b-44c0-b5ae-90fee074f4b5", 00:07:22.801 "is_configured": true, 00:07:22.801 "data_offset": 0, 00:07:22.801 "data_size": 65536 00:07:22.801 }, 00:07:22.801 { 00:07:22.801 "name": "BaseBdev2", 00:07:22.801 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:22.801 
"is_configured": false, 00:07:22.801 "data_offset": 0, 00:07:22.801 "data_size": 0 00:07:22.801 }, 00:07:22.801 { 00:07:22.801 "name": "BaseBdev3", 00:07:22.801 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:22.801 "is_configured": false, 00:07:22.801 "data_offset": 0, 00:07:22.801 "data_size": 0 00:07:22.801 } 00:07:22.801 ] 00:07:22.801 }' 00:07:22.801 19:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:22.801 19:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:23.060 19:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:23.060 19:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.060 19:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:23.060 [2024-11-26 19:48:13.923065] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:23.060 BaseBdev2 00:07:23.060 19:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:23.060 19:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:23.060 19:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:07:23.060 19:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:23.060 19:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:23.060 19:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:23.060 19:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:23.060 19:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:23.060 19:48:13 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.060 19:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:23.060 19:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:23.060 19:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:23.060 19:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.060 19:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:23.060 [ 00:07:23.060 { 00:07:23.060 "name": "BaseBdev2", 00:07:23.060 "aliases": [ 00:07:23.060 "702b4586-5c17-416d-bc17-9ce002229926" 00:07:23.060 ], 00:07:23.060 "product_name": "Malloc disk", 00:07:23.060 "block_size": 512, 00:07:23.060 "num_blocks": 65536, 00:07:23.060 "uuid": "702b4586-5c17-416d-bc17-9ce002229926", 00:07:23.060 "assigned_rate_limits": { 00:07:23.060 "rw_ios_per_sec": 0, 00:07:23.060 "rw_mbytes_per_sec": 0, 00:07:23.060 "r_mbytes_per_sec": 0, 00:07:23.060 "w_mbytes_per_sec": 0 00:07:23.060 }, 00:07:23.060 "claimed": true, 00:07:23.060 "claim_type": "exclusive_write", 00:07:23.060 "zoned": false, 00:07:23.060 "supported_io_types": { 00:07:23.060 "read": true, 00:07:23.060 "write": true, 00:07:23.060 "unmap": true, 00:07:23.060 "flush": true, 00:07:23.060 "reset": true, 00:07:23.060 "nvme_admin": false, 00:07:23.060 "nvme_io": false, 00:07:23.060 "nvme_io_md": false, 00:07:23.060 "write_zeroes": true, 00:07:23.060 "zcopy": true, 00:07:23.060 "get_zone_info": false, 00:07:23.060 "zone_management": false, 00:07:23.060 "zone_append": false, 00:07:23.060 "compare": false, 00:07:23.060 "compare_and_write": false, 00:07:23.060 "abort": true, 00:07:23.060 "seek_hole": false, 00:07:23.060 "seek_data": false, 00:07:23.060 "copy": true, 00:07:23.060 "nvme_iov_md": false 00:07:23.060 }, 00:07:23.060 
"memory_domains": [ 00:07:23.060 { 00:07:23.060 "dma_device_id": "system", 00:07:23.060 "dma_device_type": 1 00:07:23.060 }, 00:07:23.060 { 00:07:23.060 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:23.060 "dma_device_type": 2 00:07:23.060 } 00:07:23.060 ], 00:07:23.060 "driver_specific": {} 00:07:23.060 } 00:07:23.060 ] 00:07:23.060 19:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:23.060 19:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:23.060 19:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:23.060 19:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:23.060 19:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:07:23.060 19:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:23.060 19:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:23.060 19:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:23.060 19:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:23.060 19:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:23.060 19:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:23.060 19:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:23.060 19:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:23.060 19:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:23.060 19:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:07:23.060 19:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.060 19:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:23.060 19:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:23.060 19:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:23.060 19:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:23.060 "name": "Existed_Raid", 00:07:23.060 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:23.060 "strip_size_kb": 0, 00:07:23.060 "state": "configuring", 00:07:23.060 "raid_level": "raid1", 00:07:23.060 "superblock": false, 00:07:23.060 "num_base_bdevs": 3, 00:07:23.060 "num_base_bdevs_discovered": 2, 00:07:23.060 "num_base_bdevs_operational": 3, 00:07:23.060 "base_bdevs_list": [ 00:07:23.060 { 00:07:23.060 "name": "BaseBdev1", 00:07:23.060 "uuid": "4cb959f4-4d7b-44c0-b5ae-90fee074f4b5", 00:07:23.061 "is_configured": true, 00:07:23.061 "data_offset": 0, 00:07:23.061 "data_size": 65536 00:07:23.061 }, 00:07:23.061 { 00:07:23.061 "name": "BaseBdev2", 00:07:23.061 "uuid": "702b4586-5c17-416d-bc17-9ce002229926", 00:07:23.061 "is_configured": true, 00:07:23.061 "data_offset": 0, 00:07:23.061 "data_size": 65536 00:07:23.061 }, 00:07:23.061 { 00:07:23.061 "name": "BaseBdev3", 00:07:23.061 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:23.061 "is_configured": false, 00:07:23.061 "data_offset": 0, 00:07:23.061 "data_size": 0 00:07:23.061 } 00:07:23.061 ] 00:07:23.061 }' 00:07:23.061 19:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:23.061 19:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:23.626 19:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 
512 -b BaseBdev3 00:07:23.626 19:48:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.626 19:48:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:23.626 [2024-11-26 19:48:14.316702] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:07:23.626 [2024-11-26 19:48:14.316760] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:23.626 [2024-11-26 19:48:14.316773] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:07:23.626 [2024-11-26 19:48:14.317021] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:07:23.626 [2024-11-26 19:48:14.317174] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:23.626 [2024-11-26 19:48:14.317189] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:07:23.626 [2024-11-26 19:48:14.317451] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:23.626 BaseBdev3 00:07:23.626 19:48:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:23.626 19:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:07:23.626 19:48:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:07:23.626 19:48:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:23.626 19:48:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:23.626 19:48:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:23.626 19:48:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:23.626 19:48:14 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:23.626 19:48:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.626 19:48:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:23.626 19:48:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:23.626 19:48:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:07:23.626 19:48:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.626 19:48:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:23.626 [ 00:07:23.626 { 00:07:23.626 "name": "BaseBdev3", 00:07:23.626 "aliases": [ 00:07:23.626 "b5d96c34-2f34-4568-ba50-f5a6ac0dbed3" 00:07:23.626 ], 00:07:23.626 "product_name": "Malloc disk", 00:07:23.626 "block_size": 512, 00:07:23.626 "num_blocks": 65536, 00:07:23.626 "uuid": "b5d96c34-2f34-4568-ba50-f5a6ac0dbed3", 00:07:23.626 "assigned_rate_limits": { 00:07:23.626 "rw_ios_per_sec": 0, 00:07:23.626 "rw_mbytes_per_sec": 0, 00:07:23.626 "r_mbytes_per_sec": 0, 00:07:23.626 "w_mbytes_per_sec": 0 00:07:23.626 }, 00:07:23.626 "claimed": true, 00:07:23.626 "claim_type": "exclusive_write", 00:07:23.626 "zoned": false, 00:07:23.626 "supported_io_types": { 00:07:23.626 "read": true, 00:07:23.626 "write": true, 00:07:23.626 "unmap": true, 00:07:23.626 "flush": true, 00:07:23.626 "reset": true, 00:07:23.626 "nvme_admin": false, 00:07:23.626 "nvme_io": false, 00:07:23.626 "nvme_io_md": false, 00:07:23.626 "write_zeroes": true, 00:07:23.626 "zcopy": true, 00:07:23.626 "get_zone_info": false, 00:07:23.626 "zone_management": false, 00:07:23.626 "zone_append": false, 00:07:23.626 "compare": false, 00:07:23.626 "compare_and_write": false, 00:07:23.626 "abort": true, 00:07:23.626 "seek_hole": false, 00:07:23.626 "seek_data": false, 00:07:23.626 
"copy": true, 00:07:23.626 "nvme_iov_md": false 00:07:23.626 }, 00:07:23.626 "memory_domains": [ 00:07:23.626 { 00:07:23.626 "dma_device_id": "system", 00:07:23.626 "dma_device_type": 1 00:07:23.626 }, 00:07:23.626 { 00:07:23.626 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:23.626 "dma_device_type": 2 00:07:23.626 } 00:07:23.626 ], 00:07:23.626 "driver_specific": {} 00:07:23.626 } 00:07:23.626 ] 00:07:23.626 19:48:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:23.626 19:48:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:23.626 19:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:23.626 19:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:23.626 19:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:07:23.626 19:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:23.626 19:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:23.626 19:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:23.626 19:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:23.626 19:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:23.626 19:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:23.626 19:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:23.626 19:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:23.626 19:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:23.626 19:48:14 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:23.626 19:48:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.626 19:48:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:23.626 19:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:23.626 19:48:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:23.626 19:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:23.626 "name": "Existed_Raid", 00:07:23.626 "uuid": "dd553419-8bc9-4c95-a267-7b385ecee398", 00:07:23.626 "strip_size_kb": 0, 00:07:23.626 "state": "online", 00:07:23.626 "raid_level": "raid1", 00:07:23.626 "superblock": false, 00:07:23.626 "num_base_bdevs": 3, 00:07:23.626 "num_base_bdevs_discovered": 3, 00:07:23.626 "num_base_bdevs_operational": 3, 00:07:23.626 "base_bdevs_list": [ 00:07:23.626 { 00:07:23.626 "name": "BaseBdev1", 00:07:23.626 "uuid": "4cb959f4-4d7b-44c0-b5ae-90fee074f4b5", 00:07:23.626 "is_configured": true, 00:07:23.626 "data_offset": 0, 00:07:23.626 "data_size": 65536 00:07:23.626 }, 00:07:23.626 { 00:07:23.626 "name": "BaseBdev2", 00:07:23.626 "uuid": "702b4586-5c17-416d-bc17-9ce002229926", 00:07:23.626 "is_configured": true, 00:07:23.626 "data_offset": 0, 00:07:23.626 "data_size": 65536 00:07:23.626 }, 00:07:23.626 { 00:07:23.626 "name": "BaseBdev3", 00:07:23.626 "uuid": "b5d96c34-2f34-4568-ba50-f5a6ac0dbed3", 00:07:23.626 "is_configured": true, 00:07:23.626 "data_offset": 0, 00:07:23.626 "data_size": 65536 00:07:23.626 } 00:07:23.626 ] 00:07:23.626 }' 00:07:23.626 19:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:23.626 19:48:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:23.885 19:48:14 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:23.885 19:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:23.885 19:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:23.885 19:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:23.885 19:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:23.885 19:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:23.885 19:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:23.885 19:48:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.885 19:48:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:23.885 19:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:23.885 [2024-11-26 19:48:14.665113] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:23.885 19:48:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:23.885 19:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:23.885 "name": "Existed_Raid", 00:07:23.885 "aliases": [ 00:07:23.885 "dd553419-8bc9-4c95-a267-7b385ecee398" 00:07:23.885 ], 00:07:23.885 "product_name": "Raid Volume", 00:07:23.885 "block_size": 512, 00:07:23.885 "num_blocks": 65536, 00:07:23.885 "uuid": "dd553419-8bc9-4c95-a267-7b385ecee398", 00:07:23.885 "assigned_rate_limits": { 00:07:23.885 "rw_ios_per_sec": 0, 00:07:23.885 "rw_mbytes_per_sec": 0, 00:07:23.885 "r_mbytes_per_sec": 0, 00:07:23.885 "w_mbytes_per_sec": 0 00:07:23.885 }, 00:07:23.885 "claimed": false, 00:07:23.885 "zoned": false, 
00:07:23.885 "supported_io_types": { 00:07:23.885 "read": true, 00:07:23.885 "write": true, 00:07:23.885 "unmap": false, 00:07:23.885 "flush": false, 00:07:23.885 "reset": true, 00:07:23.885 "nvme_admin": false, 00:07:23.885 "nvme_io": false, 00:07:23.885 "nvme_io_md": false, 00:07:23.885 "write_zeroes": true, 00:07:23.885 "zcopy": false, 00:07:23.885 "get_zone_info": false, 00:07:23.885 "zone_management": false, 00:07:23.885 "zone_append": false, 00:07:23.885 "compare": false, 00:07:23.885 "compare_and_write": false, 00:07:23.885 "abort": false, 00:07:23.885 "seek_hole": false, 00:07:23.885 "seek_data": false, 00:07:23.885 "copy": false, 00:07:23.885 "nvme_iov_md": false 00:07:23.885 }, 00:07:23.885 "memory_domains": [ 00:07:23.885 { 00:07:23.885 "dma_device_id": "system", 00:07:23.885 "dma_device_type": 1 00:07:23.885 }, 00:07:23.885 { 00:07:23.885 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:23.885 "dma_device_type": 2 00:07:23.885 }, 00:07:23.885 { 00:07:23.885 "dma_device_id": "system", 00:07:23.885 "dma_device_type": 1 00:07:23.885 }, 00:07:23.885 { 00:07:23.885 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:23.885 "dma_device_type": 2 00:07:23.885 }, 00:07:23.885 { 00:07:23.885 "dma_device_id": "system", 00:07:23.885 "dma_device_type": 1 00:07:23.885 }, 00:07:23.885 { 00:07:23.885 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:23.885 "dma_device_type": 2 00:07:23.885 } 00:07:23.885 ], 00:07:23.885 "driver_specific": { 00:07:23.885 "raid": { 00:07:23.885 "uuid": "dd553419-8bc9-4c95-a267-7b385ecee398", 00:07:23.885 "strip_size_kb": 0, 00:07:23.885 "state": "online", 00:07:23.885 "raid_level": "raid1", 00:07:23.885 "superblock": false, 00:07:23.885 "num_base_bdevs": 3, 00:07:23.885 "num_base_bdevs_discovered": 3, 00:07:23.885 "num_base_bdevs_operational": 3, 00:07:23.885 "base_bdevs_list": [ 00:07:23.885 { 00:07:23.885 "name": "BaseBdev1", 00:07:23.885 "uuid": "4cb959f4-4d7b-44c0-b5ae-90fee074f4b5", 00:07:23.885 "is_configured": true, 00:07:23.885 
"data_offset": 0, 00:07:23.885 "data_size": 65536 00:07:23.885 }, 00:07:23.885 { 00:07:23.885 "name": "BaseBdev2", 00:07:23.885 "uuid": "702b4586-5c17-416d-bc17-9ce002229926", 00:07:23.885 "is_configured": true, 00:07:23.885 "data_offset": 0, 00:07:23.885 "data_size": 65536 00:07:23.885 }, 00:07:23.885 { 00:07:23.885 "name": "BaseBdev3", 00:07:23.885 "uuid": "b5d96c34-2f34-4568-ba50-f5a6ac0dbed3", 00:07:23.885 "is_configured": true, 00:07:23.885 "data_offset": 0, 00:07:23.885 "data_size": 65536 00:07:23.885 } 00:07:23.885 ] 00:07:23.885 } 00:07:23.885 } 00:07:23.885 }' 00:07:23.885 19:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:23.885 19:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:23.885 BaseBdev2 00:07:23.885 BaseBdev3' 00:07:23.885 19:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:23.885 19:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:23.885 19:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:23.885 19:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:23.885 19:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:23.885 19:48:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.885 19:48:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:23.885 19:48:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:23.885 19:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:07:23.885 19:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:23.885 19:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:23.885 19:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:23.885 19:48:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.885 19:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:23.885 19:48:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:23.885 19:48:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:24.143 19:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:24.143 19:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:24.143 19:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:24.143 19:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:07:24.143 19:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:24.143 19:48:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.143 19:48:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:24.143 19:48:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:24.143 19:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:24.143 19:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 
== \5\1\2\ \ \ ]] 00:07:24.143 19:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:24.144 19:48:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.144 19:48:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:24.144 [2024-11-26 19:48:14.860910] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:24.144 19:48:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:24.144 19:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:24.144 19:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:07:24.144 19:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:24.144 19:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:07:24.144 19:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:07:24.144 19:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:07:24.144 19:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:24.144 19:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:24.144 19:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:24.144 19:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:24.144 19:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:24.144 19:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:24.144 19:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:07:24.144 19:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:24.144 19:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:24.144 19:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:24.144 19:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:24.144 19:48:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.144 19:48:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:24.144 19:48:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:24.144 19:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:24.144 "name": "Existed_Raid", 00:07:24.144 "uuid": "dd553419-8bc9-4c95-a267-7b385ecee398", 00:07:24.144 "strip_size_kb": 0, 00:07:24.144 "state": "online", 00:07:24.144 "raid_level": "raid1", 00:07:24.144 "superblock": false, 00:07:24.144 "num_base_bdevs": 3, 00:07:24.144 "num_base_bdevs_discovered": 2, 00:07:24.144 "num_base_bdevs_operational": 2, 00:07:24.144 "base_bdevs_list": [ 00:07:24.144 { 00:07:24.144 "name": null, 00:07:24.144 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:24.144 "is_configured": false, 00:07:24.144 "data_offset": 0, 00:07:24.144 "data_size": 65536 00:07:24.144 }, 00:07:24.144 { 00:07:24.144 "name": "BaseBdev2", 00:07:24.144 "uuid": "702b4586-5c17-416d-bc17-9ce002229926", 00:07:24.144 "is_configured": true, 00:07:24.144 "data_offset": 0, 00:07:24.144 "data_size": 65536 00:07:24.144 }, 00:07:24.144 { 00:07:24.144 "name": "BaseBdev3", 00:07:24.144 "uuid": "b5d96c34-2f34-4568-ba50-f5a6ac0dbed3", 00:07:24.144 "is_configured": true, 00:07:24.144 "data_offset": 0, 00:07:24.144 "data_size": 65536 00:07:24.144 } 00:07:24.144 ] 
00:07:24.144 }' 00:07:24.144 19:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:24.144 19:48:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:24.402 19:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:24.402 19:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:24.402 19:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:24.402 19:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:24.402 19:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.402 19:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:24.402 19:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:24.402 19:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:24.402 19:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:24.402 19:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:24.402 19:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.402 19:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:24.402 [2024-11-26 19:48:15.303036] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:24.660 19:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:24.660 19:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:24.660 19:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:24.660 19:48:15 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:24.660 19:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:24.660 19:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.660 19:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:24.660 19:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:24.660 19:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:24.660 19:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:24.660 19:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:07:24.660 19:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.660 19:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:24.660 [2024-11-26 19:48:15.393415] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:07:24.660 [2024-11-26 19:48:15.393521] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:24.660 [2024-11-26 19:48:15.444045] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:24.660 [2024-11-26 19:48:15.444104] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:24.660 [2024-11-26 19:48:15.444115] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:07:24.660 19:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:24.660 19:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:24.660 19:48:15 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:24.660 19:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:24.660 19:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:24.660 19:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.660 19:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:24.660 19:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:24.660 19:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:24.660 19:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:24.660 19:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:07:24.660 19:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:07:24.660 19:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:07:24.660 19:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:24.660 19:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.660 19:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:24.660 BaseBdev2 00:07:24.660 19:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:24.660 19:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:07:24.660 19:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:07:24.660 19:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:24.660 
19:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:24.660 19:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:24.660 19:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:24.660 19:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:24.660 19:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.660 19:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:24.660 19:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:24.660 19:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:24.660 19:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.660 19:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:24.660 [ 00:07:24.660 { 00:07:24.660 "name": "BaseBdev2", 00:07:24.660 "aliases": [ 00:07:24.660 "586709f5-577f-449d-9da5-d3f35f14480c" 00:07:24.660 ], 00:07:24.661 "product_name": "Malloc disk", 00:07:24.661 "block_size": 512, 00:07:24.661 "num_blocks": 65536, 00:07:24.661 "uuid": "586709f5-577f-449d-9da5-d3f35f14480c", 00:07:24.661 "assigned_rate_limits": { 00:07:24.661 "rw_ios_per_sec": 0, 00:07:24.661 "rw_mbytes_per_sec": 0, 00:07:24.661 "r_mbytes_per_sec": 0, 00:07:24.661 "w_mbytes_per_sec": 0 00:07:24.661 }, 00:07:24.661 "claimed": false, 00:07:24.661 "zoned": false, 00:07:24.661 "supported_io_types": { 00:07:24.661 "read": true, 00:07:24.661 "write": true, 00:07:24.661 "unmap": true, 00:07:24.661 "flush": true, 00:07:24.661 "reset": true, 00:07:24.661 "nvme_admin": false, 00:07:24.661 "nvme_io": false, 00:07:24.661 "nvme_io_md": false, 00:07:24.661 "write_zeroes": true, 
00:07:24.661 "zcopy": true, 00:07:24.661 "get_zone_info": false, 00:07:24.661 "zone_management": false, 00:07:24.661 "zone_append": false, 00:07:24.661 "compare": false, 00:07:24.661 "compare_and_write": false, 00:07:24.661 "abort": true, 00:07:24.661 "seek_hole": false, 00:07:24.661 "seek_data": false, 00:07:24.661 "copy": true, 00:07:24.661 "nvme_iov_md": false 00:07:24.661 }, 00:07:24.661 "memory_domains": [ 00:07:24.661 { 00:07:24.661 "dma_device_id": "system", 00:07:24.661 "dma_device_type": 1 00:07:24.661 }, 00:07:24.661 { 00:07:24.661 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:24.661 "dma_device_type": 2 00:07:24.661 } 00:07:24.661 ], 00:07:24.661 "driver_specific": {} 00:07:24.661 } 00:07:24.661 ] 00:07:24.661 19:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:24.661 19:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:24.661 19:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:07:24.661 19:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:07:24.661 19:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:07:24.661 19:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.661 19:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:24.661 BaseBdev3 00:07:24.661 19:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:24.661 19:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:07:24.661 19:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:07:24.661 19:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:24.661 19:48:15 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:24.661 19:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:24.661 19:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:24.661 19:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:24.661 19:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.661 19:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:24.661 19:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:24.661 19:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:07:24.661 19:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.661 19:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:24.661 [ 00:07:24.661 { 00:07:24.661 "name": "BaseBdev3", 00:07:24.661 "aliases": [ 00:07:24.661 "0c199554-9529-4907-8998-155d3a75ec37" 00:07:24.661 ], 00:07:24.661 "product_name": "Malloc disk", 00:07:24.661 "block_size": 512, 00:07:24.661 "num_blocks": 65536, 00:07:24.661 "uuid": "0c199554-9529-4907-8998-155d3a75ec37", 00:07:24.661 "assigned_rate_limits": { 00:07:24.661 "rw_ios_per_sec": 0, 00:07:24.661 "rw_mbytes_per_sec": 0, 00:07:24.661 "r_mbytes_per_sec": 0, 00:07:24.661 "w_mbytes_per_sec": 0 00:07:24.661 }, 00:07:24.661 "claimed": false, 00:07:24.661 "zoned": false, 00:07:24.661 "supported_io_types": { 00:07:24.661 "read": true, 00:07:24.661 "write": true, 00:07:24.661 "unmap": true, 00:07:24.661 "flush": true, 00:07:24.661 "reset": true, 00:07:24.661 "nvme_admin": false, 00:07:24.661 "nvme_io": false, 00:07:24.661 "nvme_io_md": false, 00:07:24.661 "write_zeroes": true, 
00:07:24.661 "zcopy": true, 00:07:24.661 "get_zone_info": false, 00:07:24.661 "zone_management": false, 00:07:24.661 "zone_append": false, 00:07:24.661 "compare": false, 00:07:24.661 "compare_and_write": false, 00:07:24.661 "abort": true, 00:07:24.661 "seek_hole": false, 00:07:24.661 "seek_data": false, 00:07:24.661 "copy": true, 00:07:24.661 "nvme_iov_md": false 00:07:24.661 }, 00:07:24.661 "memory_domains": [ 00:07:24.661 { 00:07:24.661 "dma_device_id": "system", 00:07:24.661 "dma_device_type": 1 00:07:24.661 }, 00:07:24.661 { 00:07:24.661 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:24.661 "dma_device_type": 2 00:07:24.661 } 00:07:24.661 ], 00:07:24.661 "driver_specific": {} 00:07:24.661 } 00:07:24.661 ] 00:07:24.661 19:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:24.661 19:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:24.661 19:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:07:24.661 19:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:07:24.661 19:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:07:24.661 19:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.661 19:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:24.919 [2024-11-26 19:48:15.596860] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:24.919 [2024-11-26 19:48:15.597061] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:24.919 [2024-11-26 19:48:15.597130] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:24.919 [2024-11-26 19:48:15.598897] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:07:24.919 19:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:24.919 19:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:07:24.919 19:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:24.919 19:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:24.919 19:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:24.919 19:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:24.919 19:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:24.919 19:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:24.919 19:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:24.919 19:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:24.919 19:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:24.919 19:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:24.919 19:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:24.919 19:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.919 19:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:24.919 19:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:24.919 19:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:07:24.919 "name": "Existed_Raid", 00:07:24.919 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:24.919 "strip_size_kb": 0, 00:07:24.919 "state": "configuring", 00:07:24.919 "raid_level": "raid1", 00:07:24.919 "superblock": false, 00:07:24.919 "num_base_bdevs": 3, 00:07:24.919 "num_base_bdevs_discovered": 2, 00:07:24.919 "num_base_bdevs_operational": 3, 00:07:24.919 "base_bdevs_list": [ 00:07:24.919 { 00:07:24.919 "name": "BaseBdev1", 00:07:24.919 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:24.919 "is_configured": false, 00:07:24.919 "data_offset": 0, 00:07:24.919 "data_size": 0 00:07:24.919 }, 00:07:24.919 { 00:07:24.919 "name": "BaseBdev2", 00:07:24.919 "uuid": "586709f5-577f-449d-9da5-d3f35f14480c", 00:07:24.919 "is_configured": true, 00:07:24.919 "data_offset": 0, 00:07:24.919 "data_size": 65536 00:07:24.919 }, 00:07:24.919 { 00:07:24.919 "name": "BaseBdev3", 00:07:24.919 "uuid": "0c199554-9529-4907-8998-155d3a75ec37", 00:07:24.919 "is_configured": true, 00:07:24.919 "data_offset": 0, 00:07:24.919 "data_size": 65536 00:07:24.919 } 00:07:24.919 ] 00:07:24.919 }' 00:07:24.919 19:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:24.919 19:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.178 19:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:07:25.178 19:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.178 19:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.178 [2024-11-26 19:48:15.968969] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:25.178 19:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.178 19:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state 
Existed_Raid configuring raid1 0 3 00:07:25.178 19:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:25.178 19:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:25.178 19:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:25.178 19:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:25.178 19:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:25.178 19:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:25.178 19:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:25.178 19:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:25.178 19:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:25.178 19:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:25.178 19:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.178 19:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.178 19:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:25.178 19:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.178 19:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:25.178 "name": "Existed_Raid", 00:07:25.178 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:25.178 "strip_size_kb": 0, 00:07:25.178 "state": "configuring", 00:07:25.178 "raid_level": "raid1", 00:07:25.178 "superblock": false, 00:07:25.178 "num_base_bdevs": 3, 
00:07:25.178 "num_base_bdevs_discovered": 1, 00:07:25.178 "num_base_bdevs_operational": 3, 00:07:25.178 "base_bdevs_list": [ 00:07:25.178 { 00:07:25.178 "name": "BaseBdev1", 00:07:25.178 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:25.178 "is_configured": false, 00:07:25.178 "data_offset": 0, 00:07:25.178 "data_size": 0 00:07:25.178 }, 00:07:25.178 { 00:07:25.178 "name": null, 00:07:25.178 "uuid": "586709f5-577f-449d-9da5-d3f35f14480c", 00:07:25.178 "is_configured": false, 00:07:25.178 "data_offset": 0, 00:07:25.178 "data_size": 65536 00:07:25.178 }, 00:07:25.178 { 00:07:25.178 "name": "BaseBdev3", 00:07:25.178 "uuid": "0c199554-9529-4907-8998-155d3a75ec37", 00:07:25.178 "is_configured": true, 00:07:25.178 "data_offset": 0, 00:07:25.178 "data_size": 65536 00:07:25.178 } 00:07:25.178 ] 00:07:25.178 }' 00:07:25.178 19:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:25.178 19:48:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.436 19:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:07:25.436 19:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:25.436 19:48:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.436 19:48:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.436 19:48:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.436 19:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:07:25.436 19:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:25.436 19:48:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.436 19:48:16 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.696 [2024-11-26 19:48:16.389479] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:25.696 BaseBdev1 00:07:25.696 19:48:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.696 19:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:07:25.696 19:48:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:07:25.696 19:48:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:25.696 19:48:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:25.696 19:48:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:25.696 19:48:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:25.696 19:48:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:25.696 19:48:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.696 19:48:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.696 19:48:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.696 19:48:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:25.696 19:48:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.696 19:48:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.696 [ 00:07:25.696 { 00:07:25.696 "name": "BaseBdev1", 00:07:25.696 "aliases": [ 00:07:25.696 "f7a3d073-cd03-446d-950a-822eea275f3a" 00:07:25.696 ], 00:07:25.696 "product_name": "Malloc disk", 
00:07:25.696 "block_size": 512, 00:07:25.696 "num_blocks": 65536, 00:07:25.696 "uuid": "f7a3d073-cd03-446d-950a-822eea275f3a", 00:07:25.696 "assigned_rate_limits": { 00:07:25.696 "rw_ios_per_sec": 0, 00:07:25.696 "rw_mbytes_per_sec": 0, 00:07:25.696 "r_mbytes_per_sec": 0, 00:07:25.696 "w_mbytes_per_sec": 0 00:07:25.696 }, 00:07:25.696 "claimed": true, 00:07:25.696 "claim_type": "exclusive_write", 00:07:25.696 "zoned": false, 00:07:25.696 "supported_io_types": { 00:07:25.696 "read": true, 00:07:25.696 "write": true, 00:07:25.696 "unmap": true, 00:07:25.696 "flush": true, 00:07:25.696 "reset": true, 00:07:25.696 "nvme_admin": false, 00:07:25.696 "nvme_io": false, 00:07:25.696 "nvme_io_md": false, 00:07:25.696 "write_zeroes": true, 00:07:25.696 "zcopy": true, 00:07:25.696 "get_zone_info": false, 00:07:25.696 "zone_management": false, 00:07:25.696 "zone_append": false, 00:07:25.696 "compare": false, 00:07:25.696 "compare_and_write": false, 00:07:25.696 "abort": true, 00:07:25.696 "seek_hole": false, 00:07:25.696 "seek_data": false, 00:07:25.696 "copy": true, 00:07:25.696 "nvme_iov_md": false 00:07:25.696 }, 00:07:25.696 "memory_domains": [ 00:07:25.696 { 00:07:25.696 "dma_device_id": "system", 00:07:25.696 "dma_device_type": 1 00:07:25.696 }, 00:07:25.696 { 00:07:25.696 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:25.696 "dma_device_type": 2 00:07:25.696 } 00:07:25.696 ], 00:07:25.696 "driver_specific": {} 00:07:25.696 } 00:07:25.696 ] 00:07:25.696 19:48:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.696 19:48:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:25.696 19:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:07:25.696 19:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:25.696 19:48:16 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:25.696 19:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:25.696 19:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:25.696 19:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:25.696 19:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:25.696 19:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:25.696 19:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:25.696 19:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:25.696 19:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:25.696 19:48:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.696 19:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:25.696 19:48:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.696 19:48:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.696 19:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:25.696 "name": "Existed_Raid", 00:07:25.696 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:25.696 "strip_size_kb": 0, 00:07:25.696 "state": "configuring", 00:07:25.696 "raid_level": "raid1", 00:07:25.696 "superblock": false, 00:07:25.696 "num_base_bdevs": 3, 00:07:25.696 "num_base_bdevs_discovered": 2, 00:07:25.696 "num_base_bdevs_operational": 3, 00:07:25.696 "base_bdevs_list": [ 00:07:25.696 { 00:07:25.696 "name": "BaseBdev1", 00:07:25.696 "uuid": 
"f7a3d073-cd03-446d-950a-822eea275f3a", 00:07:25.696 "is_configured": true, 00:07:25.696 "data_offset": 0, 00:07:25.696 "data_size": 65536 00:07:25.696 }, 00:07:25.696 { 00:07:25.696 "name": null, 00:07:25.696 "uuid": "586709f5-577f-449d-9da5-d3f35f14480c", 00:07:25.696 "is_configured": false, 00:07:25.696 "data_offset": 0, 00:07:25.696 "data_size": 65536 00:07:25.696 }, 00:07:25.696 { 00:07:25.696 "name": "BaseBdev3", 00:07:25.696 "uuid": "0c199554-9529-4907-8998-155d3a75ec37", 00:07:25.696 "is_configured": true, 00:07:25.696 "data_offset": 0, 00:07:25.696 "data_size": 65536 00:07:25.696 } 00:07:25.696 ] 00:07:25.696 }' 00:07:25.696 19:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:25.696 19:48:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.956 19:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:07:25.956 19:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:25.956 19:48:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.956 19:48:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.956 19:48:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.956 19:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:07:25.956 19:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:07:25.956 19:48:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.956 19:48:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.956 [2024-11-26 19:48:16.737606] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:07:25.956 19:48:16 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.956 19:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:07:25.956 19:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:25.956 19:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:25.956 19:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:25.956 19:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:25.956 19:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:25.956 19:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:25.956 19:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:25.956 19:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:25.956 19:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:25.956 19:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:25.956 19:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:25.956 19:48:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.956 19:48:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.956 19:48:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.956 19:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:25.956 "name": "Existed_Raid", 00:07:25.956 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:07:25.956 "strip_size_kb": 0, 00:07:25.956 "state": "configuring", 00:07:25.956 "raid_level": "raid1", 00:07:25.956 "superblock": false, 00:07:25.956 "num_base_bdevs": 3, 00:07:25.956 "num_base_bdevs_discovered": 1, 00:07:25.956 "num_base_bdevs_operational": 3, 00:07:25.956 "base_bdevs_list": [ 00:07:25.956 { 00:07:25.956 "name": "BaseBdev1", 00:07:25.956 "uuid": "f7a3d073-cd03-446d-950a-822eea275f3a", 00:07:25.956 "is_configured": true, 00:07:25.956 "data_offset": 0, 00:07:25.956 "data_size": 65536 00:07:25.956 }, 00:07:25.956 { 00:07:25.956 "name": null, 00:07:25.956 "uuid": "586709f5-577f-449d-9da5-d3f35f14480c", 00:07:25.956 "is_configured": false, 00:07:25.956 "data_offset": 0, 00:07:25.956 "data_size": 65536 00:07:25.956 }, 00:07:25.956 { 00:07:25.956 "name": null, 00:07:25.956 "uuid": "0c199554-9529-4907-8998-155d3a75ec37", 00:07:25.956 "is_configured": false, 00:07:25.956 "data_offset": 0, 00:07:25.956 "data_size": 65536 00:07:25.956 } 00:07:25.956 ] 00:07:25.956 }' 00:07:25.956 19:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:25.956 19:48:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.215 19:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:07:26.215 19:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:26.215 19:48:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.215 19:48:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.215 19:48:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.215 19:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:07:26.215 19:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:07:26.215 19:48:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.215 19:48:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.473 [2024-11-26 19:48:17.153733] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:07:26.473 19:48:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.473 19:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:07:26.473 19:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:26.473 19:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:26.473 19:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:26.473 19:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:26.473 19:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:26.473 19:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:26.473 19:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:26.473 19:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:26.473 19:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:26.473 19:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:26.473 19:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:26.473 19:48:17 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.473 19:48:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.473 19:48:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.473 19:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:26.473 "name": "Existed_Raid", 00:07:26.473 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:26.473 "strip_size_kb": 0, 00:07:26.473 "state": "configuring", 00:07:26.473 "raid_level": "raid1", 00:07:26.473 "superblock": false, 00:07:26.473 "num_base_bdevs": 3, 00:07:26.473 "num_base_bdevs_discovered": 2, 00:07:26.473 "num_base_bdevs_operational": 3, 00:07:26.473 "base_bdevs_list": [ 00:07:26.473 { 00:07:26.473 "name": "BaseBdev1", 00:07:26.473 "uuid": "f7a3d073-cd03-446d-950a-822eea275f3a", 00:07:26.473 "is_configured": true, 00:07:26.473 "data_offset": 0, 00:07:26.473 "data_size": 65536 00:07:26.473 }, 00:07:26.473 { 00:07:26.473 "name": null, 00:07:26.473 "uuid": "586709f5-577f-449d-9da5-d3f35f14480c", 00:07:26.473 "is_configured": false, 00:07:26.473 "data_offset": 0, 00:07:26.473 "data_size": 65536 00:07:26.473 }, 00:07:26.473 { 00:07:26.473 "name": "BaseBdev3", 00:07:26.473 "uuid": "0c199554-9529-4907-8998-155d3a75ec37", 00:07:26.473 "is_configured": true, 00:07:26.473 "data_offset": 0, 00:07:26.473 "data_size": 65536 00:07:26.473 } 00:07:26.473 ] 00:07:26.473 }' 00:07:26.473 19:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:26.473 19:48:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.731 19:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:26.731 19:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:07:26.731 19:48:17 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.731 19:48:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.731 19:48:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.731 19:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:07:26.731 19:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:26.731 19:48:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.731 19:48:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.732 [2024-11-26 19:48:17.517801] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:26.732 19:48:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.732 19:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:07:26.732 19:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:26.732 19:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:26.732 19:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:26.732 19:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:26.732 19:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:26.732 19:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:26.732 19:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:26.732 19:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:26.732 19:48:17 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:26.732 19:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:26.732 19:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:26.732 19:48:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.732 19:48:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.732 19:48:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.732 19:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:26.732 "name": "Existed_Raid", 00:07:26.732 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:26.732 "strip_size_kb": 0, 00:07:26.732 "state": "configuring", 00:07:26.732 "raid_level": "raid1", 00:07:26.732 "superblock": false, 00:07:26.732 "num_base_bdevs": 3, 00:07:26.732 "num_base_bdevs_discovered": 1, 00:07:26.732 "num_base_bdevs_operational": 3, 00:07:26.732 "base_bdevs_list": [ 00:07:26.732 { 00:07:26.732 "name": null, 00:07:26.732 "uuid": "f7a3d073-cd03-446d-950a-822eea275f3a", 00:07:26.732 "is_configured": false, 00:07:26.732 "data_offset": 0, 00:07:26.732 "data_size": 65536 00:07:26.732 }, 00:07:26.732 { 00:07:26.732 "name": null, 00:07:26.732 "uuid": "586709f5-577f-449d-9da5-d3f35f14480c", 00:07:26.732 "is_configured": false, 00:07:26.732 "data_offset": 0, 00:07:26.732 "data_size": 65536 00:07:26.732 }, 00:07:26.732 { 00:07:26.732 "name": "BaseBdev3", 00:07:26.732 "uuid": "0c199554-9529-4907-8998-155d3a75ec37", 00:07:26.732 "is_configured": true, 00:07:26.732 "data_offset": 0, 00:07:26.732 "data_size": 65536 00:07:26.732 } 00:07:26.732 ] 00:07:26.732 }' 00:07:26.732 19:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:26.732 19:48:17 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:07:26.990 19:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:26.990 19:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:07:26.990 19:48:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.990 19:48:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.990 19:48:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.990 19:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:07:26.990 19:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:07:26.990 19:48:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.990 19:48:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.990 [2024-11-26 19:48:17.904452] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:26.990 19:48:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.990 19:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:07:26.990 19:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:26.990 19:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:26.990 19:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:26.990 19:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:26.990 19:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=3 00:07:26.990 19:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:26.990 19:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:26.990 19:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:26.990 19:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:26.990 19:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:26.990 19:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:26.990 19:48:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.990 19:48:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.990 19:48:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.248 19:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:27.248 "name": "Existed_Raid", 00:07:27.248 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:27.248 "strip_size_kb": 0, 00:07:27.248 "state": "configuring", 00:07:27.248 "raid_level": "raid1", 00:07:27.248 "superblock": false, 00:07:27.248 "num_base_bdevs": 3, 00:07:27.248 "num_base_bdevs_discovered": 2, 00:07:27.248 "num_base_bdevs_operational": 3, 00:07:27.248 "base_bdevs_list": [ 00:07:27.248 { 00:07:27.248 "name": null, 00:07:27.248 "uuid": "f7a3d073-cd03-446d-950a-822eea275f3a", 00:07:27.248 "is_configured": false, 00:07:27.248 "data_offset": 0, 00:07:27.248 "data_size": 65536 00:07:27.248 }, 00:07:27.248 { 00:07:27.248 "name": "BaseBdev2", 00:07:27.248 "uuid": "586709f5-577f-449d-9da5-d3f35f14480c", 00:07:27.248 "is_configured": true, 00:07:27.248 "data_offset": 0, 00:07:27.248 "data_size": 65536 00:07:27.248 }, 00:07:27.248 { 
00:07:27.248 "name": "BaseBdev3", 00:07:27.248 "uuid": "0c199554-9529-4907-8998-155d3a75ec37", 00:07:27.248 "is_configured": true, 00:07:27.248 "data_offset": 0, 00:07:27.248 "data_size": 65536 00:07:27.248 } 00:07:27.248 ] 00:07:27.248 }' 00:07:27.248 19:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:27.248 19:48:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.507 19:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:27.507 19:48:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.507 19:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:07:27.507 19:48:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.507 19:48:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.507 19:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:07:27.507 19:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:27.507 19:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:07:27.507 19:48:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.507 19:48:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.507 19:48:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.507 19:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u f7a3d073-cd03-446d-950a-822eea275f3a 00:07:27.507 19:48:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.507 19:48:18 
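The `verify_raid_bdev_state` helper traced above (bdev_raid.sh@113) captures the full `bdev_raid_get_bdevs all` output and selects the entry it cares about by name with jq, then compares fields such as `state` and `num_base_bdevs_discovered` against expected values. A minimal sketch of that selection, assuming `jq` is installed and using a trimmed hand-written sample of the RPC output rather than a live SPDK target:

```shell
# Trimmed sample of the JSON shape returned by `rpc.py bdev_raid_get_bdevs all`
bdevs='[
  {"name": "Existed_Raid", "state": "configuring", "raid_level": "raid1",
   "num_base_bdevs": 3, "num_base_bdevs_discovered": 2, "num_base_bdevs_operational": 3}
]'

# Select the raid bdev by name, as bdev_raid.sh@113 does
raid_bdev_info=$(echo "$bdevs" | jq -r '.[] | select(.name == "Existed_Raid")')

# Pull out individual fields for the state comparison
state=$(echo "$raid_bdev_info" | jq -r '.state')
discovered=$(echo "$raid_bdev_info" | jq -r '.num_base_bdevs_discovered')
echo "$state $discovered"
```

The test then string-compares each extracted field against the expected value it was invoked with (here `configuring` and the discovered count).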
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.507 [2024-11-26 19:48:18.325093] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:07:27.507 [2024-11-26 19:48:18.325145] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:07:27.507 [2024-11-26 19:48:18.325152] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:07:27.507 [2024-11-26 19:48:18.325397] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:07:27.507 [2024-11-26 19:48:18.325525] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:07:27.507 [2024-11-26 19:48:18.325534] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:07:27.507 NewBaseBdev 00:07:27.507 [2024-11-26 19:48:18.325744] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:27.507 19:48:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.507 19:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:07:27.507 19:48:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:07:27.507 19:48:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:27.507 19:48:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:27.507 19:48:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:27.508 19:48:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:27.508 19:48:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:27.508 19:48:18 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.508 19:48:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.508 19:48:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.508 19:48:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:07:27.508 19:48:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.508 19:48:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.508 [ 00:07:27.508 { 00:07:27.508 "name": "NewBaseBdev", 00:07:27.508 "aliases": [ 00:07:27.508 "f7a3d073-cd03-446d-950a-822eea275f3a" 00:07:27.508 ], 00:07:27.508 "product_name": "Malloc disk", 00:07:27.508 "block_size": 512, 00:07:27.508 "num_blocks": 65536, 00:07:27.508 "uuid": "f7a3d073-cd03-446d-950a-822eea275f3a", 00:07:27.508 "assigned_rate_limits": { 00:07:27.508 "rw_ios_per_sec": 0, 00:07:27.508 "rw_mbytes_per_sec": 0, 00:07:27.508 "r_mbytes_per_sec": 0, 00:07:27.508 "w_mbytes_per_sec": 0 00:07:27.508 }, 00:07:27.508 "claimed": true, 00:07:27.508 "claim_type": "exclusive_write", 00:07:27.508 "zoned": false, 00:07:27.508 "supported_io_types": { 00:07:27.508 "read": true, 00:07:27.508 "write": true, 00:07:27.508 "unmap": true, 00:07:27.508 "flush": true, 00:07:27.508 "reset": true, 00:07:27.508 "nvme_admin": false, 00:07:27.508 "nvme_io": false, 00:07:27.508 "nvme_io_md": false, 00:07:27.508 "write_zeroes": true, 00:07:27.508 "zcopy": true, 00:07:27.508 "get_zone_info": false, 00:07:27.508 "zone_management": false, 00:07:27.508 "zone_append": false, 00:07:27.508 "compare": false, 00:07:27.508 "compare_and_write": false, 00:07:27.508 "abort": true, 00:07:27.508 "seek_hole": false, 00:07:27.508 "seek_data": false, 00:07:27.508 "copy": true, 00:07:27.508 "nvme_iov_md": false 00:07:27.508 }, 00:07:27.508 "memory_domains": [ 00:07:27.508 { 00:07:27.508 
"dma_device_id": "system", 00:07:27.508 "dma_device_type": 1 00:07:27.508 }, 00:07:27.508 { 00:07:27.508 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:27.508 "dma_device_type": 2 00:07:27.508 } 00:07:27.508 ], 00:07:27.508 "driver_specific": {} 00:07:27.508 } 00:07:27.508 ] 00:07:27.508 19:48:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.508 19:48:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:27.508 19:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:07:27.508 19:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:27.508 19:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:27.508 19:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:27.508 19:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:27.508 19:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:27.508 19:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:27.508 19:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:27.508 19:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:27.508 19:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:27.508 19:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:27.508 19:48:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.508 19:48:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.508 19:48:18 
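The `waitforbdev NewBaseBdev` call traced above defaults its timeout to 2000 ms, waits for bdev examination to finish, and then polls `bdev_get_bdevs -b <name>` until the bdev appears. The retry pattern can be sketched as below; `bdev_exists` is a hypothetical stand-in for the real `rpc_cmd bdev_get_bdevs` probe, modelling existence with a marker file:

```shell
# Hypothetical probe standing in for `rpc_cmd bdev_get_bdevs -b <name>`:
# here a bdev "exists" once a marker file for it does.
bdev_exists() { [ -f "/tmp/bdev_$1" ]; }

waitforbdev() {
    local bdev_name=$1
    local bdev_timeout=${2:-2000}           # default timeout in ms, as in the trace
    local i tries=$(( bdev_timeout / 100 ))
    for (( i = 0; i < tries; i++ )); do
        bdev_exists "$bdev_name" && return 0
        sleep 0.1                           # poll every 100 ms
    done
    return 1                                # bdev never showed up within the timeout
}

touch "/tmp/bdev_NewBaseBdev"
waitforbdev NewBaseBdev && echo "NewBaseBdev ready"
```

Polling with a bounded retry count keeps the test from hanging forever if the bdev is never created, while still tolerating the asynchronous registration seen in the debug log.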
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:27.508 19:48:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.508 19:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:27.508 "name": "Existed_Raid", 00:07:27.508 "uuid": "803fc477-3761-4511-87cd-bc574ac7778d", 00:07:27.508 "strip_size_kb": 0, 00:07:27.508 "state": "online", 00:07:27.508 "raid_level": "raid1", 00:07:27.508 "superblock": false, 00:07:27.508 "num_base_bdevs": 3, 00:07:27.508 "num_base_bdevs_discovered": 3, 00:07:27.508 "num_base_bdevs_operational": 3, 00:07:27.508 "base_bdevs_list": [ 00:07:27.508 { 00:07:27.508 "name": "NewBaseBdev", 00:07:27.508 "uuid": "f7a3d073-cd03-446d-950a-822eea275f3a", 00:07:27.508 "is_configured": true, 00:07:27.508 "data_offset": 0, 00:07:27.508 "data_size": 65536 00:07:27.508 }, 00:07:27.508 { 00:07:27.508 "name": "BaseBdev2", 00:07:27.508 "uuid": "586709f5-577f-449d-9da5-d3f35f14480c", 00:07:27.508 "is_configured": true, 00:07:27.508 "data_offset": 0, 00:07:27.508 "data_size": 65536 00:07:27.508 }, 00:07:27.508 { 00:07:27.508 "name": "BaseBdev3", 00:07:27.508 "uuid": "0c199554-9529-4907-8998-155d3a75ec37", 00:07:27.508 "is_configured": true, 00:07:27.508 "data_offset": 0, 00:07:27.508 "data_size": 65536 00:07:27.508 } 00:07:27.508 ] 00:07:27.508 }' 00:07:27.508 19:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:27.508 19:48:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.766 19:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:07:27.766 19:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:27.766 19:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:27.766 
19:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:27.766 19:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:27.766 19:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:27.766 19:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:27.766 19:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:27.766 19:48:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.766 19:48:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.766 [2024-11-26 19:48:18.677525] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:27.766 19:48:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.766 19:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:27.766 "name": "Existed_Raid", 00:07:27.766 "aliases": [ 00:07:27.766 "803fc477-3761-4511-87cd-bc574ac7778d" 00:07:27.766 ], 00:07:27.766 "product_name": "Raid Volume", 00:07:27.766 "block_size": 512, 00:07:27.766 "num_blocks": 65536, 00:07:27.766 "uuid": "803fc477-3761-4511-87cd-bc574ac7778d", 00:07:27.766 "assigned_rate_limits": { 00:07:27.766 "rw_ios_per_sec": 0, 00:07:27.766 "rw_mbytes_per_sec": 0, 00:07:27.766 "r_mbytes_per_sec": 0, 00:07:27.766 "w_mbytes_per_sec": 0 00:07:27.766 }, 00:07:27.766 "claimed": false, 00:07:27.766 "zoned": false, 00:07:27.767 "supported_io_types": { 00:07:27.767 "read": true, 00:07:27.767 "write": true, 00:07:27.767 "unmap": false, 00:07:27.767 "flush": false, 00:07:27.767 "reset": true, 00:07:27.767 "nvme_admin": false, 00:07:27.767 "nvme_io": false, 00:07:27.767 "nvme_io_md": false, 00:07:27.767 "write_zeroes": true, 00:07:27.767 "zcopy": false, 00:07:27.767 
"get_zone_info": false, 00:07:27.767 "zone_management": false, 00:07:27.767 "zone_append": false, 00:07:27.767 "compare": false, 00:07:27.767 "compare_and_write": false, 00:07:27.767 "abort": false, 00:07:27.767 "seek_hole": false, 00:07:27.767 "seek_data": false, 00:07:27.767 "copy": false, 00:07:27.767 "nvme_iov_md": false 00:07:27.767 }, 00:07:27.767 "memory_domains": [ 00:07:27.767 { 00:07:27.767 "dma_device_id": "system", 00:07:27.767 "dma_device_type": 1 00:07:27.767 }, 00:07:27.767 { 00:07:27.767 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:27.767 "dma_device_type": 2 00:07:27.767 }, 00:07:27.767 { 00:07:27.767 "dma_device_id": "system", 00:07:27.767 "dma_device_type": 1 00:07:27.767 }, 00:07:27.767 { 00:07:27.767 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:27.767 "dma_device_type": 2 00:07:27.767 }, 00:07:27.767 { 00:07:27.767 "dma_device_id": "system", 00:07:27.767 "dma_device_type": 1 00:07:27.767 }, 00:07:27.767 { 00:07:27.767 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:27.767 "dma_device_type": 2 00:07:27.767 } 00:07:27.767 ], 00:07:27.767 "driver_specific": { 00:07:27.767 "raid": { 00:07:27.767 "uuid": "803fc477-3761-4511-87cd-bc574ac7778d", 00:07:27.767 "strip_size_kb": 0, 00:07:27.767 "state": "online", 00:07:27.767 "raid_level": "raid1", 00:07:27.767 "superblock": false, 00:07:27.767 "num_base_bdevs": 3, 00:07:27.767 "num_base_bdevs_discovered": 3, 00:07:27.767 "num_base_bdevs_operational": 3, 00:07:27.767 "base_bdevs_list": [ 00:07:27.767 { 00:07:27.767 "name": "NewBaseBdev", 00:07:27.767 "uuid": "f7a3d073-cd03-446d-950a-822eea275f3a", 00:07:27.767 "is_configured": true, 00:07:27.767 "data_offset": 0, 00:07:27.767 "data_size": 65536 00:07:27.767 }, 00:07:27.767 { 00:07:27.767 "name": "BaseBdev2", 00:07:27.767 "uuid": "586709f5-577f-449d-9da5-d3f35f14480c", 00:07:27.767 "is_configured": true, 00:07:27.767 "data_offset": 0, 00:07:27.767 "data_size": 65536 00:07:27.767 }, 00:07:27.767 { 00:07:27.767 "name": "BaseBdev3", 00:07:27.767 "uuid": 
"0c199554-9529-4907-8998-155d3a75ec37", 00:07:27.767 "is_configured": true, 00:07:27.767 "data_offset": 0, 00:07:27.767 "data_size": 65536 00:07:27.767 } 00:07:27.767 ] 00:07:27.767 } 00:07:27.767 } 00:07:27.767 }' 00:07:27.767 19:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:28.024 19:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:07:28.024 BaseBdev2 00:07:28.024 BaseBdev3' 00:07:28.024 19:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:28.024 19:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:28.024 19:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:28.024 19:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:28.024 19:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:07:28.024 19:48:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.024 19:48:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.024 19:48:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.024 19:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:28.024 19:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:28.024 19:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:28.024 19:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:07:28.024 19:48:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.024 19:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:28.024 19:48:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.024 19:48:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.024 19:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:28.024 19:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:28.024 19:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:28.025 19:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:07:28.025 19:48:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.025 19:48:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.025 19:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:28.025 19:48:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.025 19:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:28.025 19:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:28.025 19:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:28.025 19:48:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.025 19:48:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.025 
[2024-11-26 19:48:18.849254] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:28.025 [2024-11-26 19:48:18.849296] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:28.025 [2024-11-26 19:48:18.849387] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:28.025 [2024-11-26 19:48:18.849647] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:28.025 [2024-11-26 19:48:18.849662] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:07:28.025 19:48:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.025 19:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 65798 00:07:28.025 19:48:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 65798 ']' 00:07:28.025 19:48:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 65798 00:07:28.025 19:48:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:07:28.025 19:48:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:28.025 19:48:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65798 00:07:28.025 killing process with pid 65798 00:07:28.025 19:48:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:28.025 19:48:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:28.025 19:48:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65798' 00:07:28.025 19:48:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 65798 00:07:28.025 19:48:18 
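The `killprocess 65798` teardown traced above (autotest_common.sh@954–973) guards the kill: it refuses an empty pid, verifies the process is still alive with `kill -0`, logs which pid it is killing, then signals and reaps it. A simplified sketch of that pattern; the real helper also inspects the process name via `ps` and handles `sudo`-owned processes specially:

```shell
killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1               # refuse an empty pid
    kill -0 "$pid" 2>/dev/null || return 1  # bail if the process is already gone
    echo "killing process with pid $pid"
    kill "$pid"                             # send SIGTERM
    wait "$pid" 2>/dev/null || true         # reap it; ignore the signal exit status
    return 0
}

sleep 30 &
pid=$!
killprocess "$pid" && echo "stopped $pid"
```

Checking liveness before signalling avoids spurious failures when the test target already exited, and reaping with `wait` prevents the killed app server from lingering as a zombie for the rest of the autotest run.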
bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 65798 00:07:28.025 [2024-11-26 19:48:18.881977] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:28.282 [2024-11-26 19:48:19.042637] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:28.846 ************************************ 00:07:28.846 END TEST raid_state_function_test 00:07:28.846 ************************************ 00:07:28.846 19:48:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:07:28.846 00:07:28.846 real 0m7.719s 00:07:28.846 user 0m12.357s 00:07:28.846 sys 0m1.333s 00:07:28.846 19:48:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:28.846 19:48:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.846 19:48:19 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:07:28.846 19:48:19 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:28.846 19:48:19 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:28.846 19:48:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:28.846 ************************************ 00:07:28.846 START TEST raid_state_function_test_sb 00:07:28.847 ************************************ 00:07:28.847 19:48:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 3 true 00:07:28.847 19:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:07:28.847 19:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:07:28.847 19:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:07:28.847 19:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:28.847 19:48:19 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:28.847 19:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:28.847 19:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:28.847 19:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:28.847 19:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:28.847 19:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:28.847 19:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:28.847 19:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:28.847 19:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:07:28.847 19:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:28.847 19:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:28.847 19:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:07:28.847 19:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:28.847 19:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:28.847 19:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:28.847 19:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:28.847 19:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:28.847 19:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:07:28.847 
19:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:07:28.847 19:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:07:28.847 19:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:07:28.847 19:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=66396 00:07:28.847 Process raid pid: 66396 00:07:28.847 19:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 66396' 00:07:28.847 19:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 66396 00:07:28.847 19:48:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 66396 ']' 00:07:28.847 19:48:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:28.847 19:48:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:28.847 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:28.847 19:48:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:28.847 19:48:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:28.847 19:48:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:28.847 19:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:29.104 [2024-11-26 19:48:19.789231] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 
00:07:29.104 [2024-11-26 19:48:19.789369] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:29.104 [2024-11-26 19:48:19.950330] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.362 [2024-11-26 19:48:20.051496] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.362 [2024-11-26 19:48:20.188013] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:29.362 [2024-11-26 19:48:20.188052] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:29.927 19:48:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:29.927 19:48:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:07:29.927 19:48:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:07:29.927 19:48:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.927 19:48:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:29.927 [2024-11-26 19:48:20.646974] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:29.927 [2024-11-26 19:48:20.647026] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:29.927 [2024-11-26 19:48:20.647036] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:29.927 [2024-11-26 19:48:20.647046] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:29.927 [2024-11-26 19:48:20.647052] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:07:29.927 [2024-11-26 19:48:20.647061] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:07:29.927 19:48:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.927 19:48:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:07:29.927 19:48:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:29.927 19:48:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:29.927 19:48:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:29.927 19:48:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:29.927 19:48:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:29.927 19:48:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:29.927 19:48:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:29.927 19:48:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:29.928 19:48:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:29.928 19:48:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:29.928 19:48:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.928 19:48:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:29.928 19:48:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:29.928 19:48:20 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.928 19:48:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:29.928 "name": "Existed_Raid", 00:07:29.928 "uuid": "ae98698a-e7f0-44a0-b610-f9d1947a3b18", 00:07:29.928 "strip_size_kb": 0, 00:07:29.928 "state": "configuring", 00:07:29.928 "raid_level": "raid1", 00:07:29.928 "superblock": true, 00:07:29.928 "num_base_bdevs": 3, 00:07:29.928 "num_base_bdevs_discovered": 0, 00:07:29.928 "num_base_bdevs_operational": 3, 00:07:29.928 "base_bdevs_list": [ 00:07:29.928 { 00:07:29.928 "name": "BaseBdev1", 00:07:29.928 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:29.928 "is_configured": false, 00:07:29.928 "data_offset": 0, 00:07:29.928 "data_size": 0 00:07:29.928 }, 00:07:29.928 { 00:07:29.928 "name": "BaseBdev2", 00:07:29.928 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:29.928 "is_configured": false, 00:07:29.928 "data_offset": 0, 00:07:29.928 "data_size": 0 00:07:29.928 }, 00:07:29.928 { 00:07:29.928 "name": "BaseBdev3", 00:07:29.928 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:29.928 "is_configured": false, 00:07:29.928 "data_offset": 0, 00:07:29.928 "data_size": 0 00:07:29.928 } 00:07:29.928 ] 00:07:29.928 }' 00:07:29.928 19:48:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:29.928 19:48:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:30.186 19:48:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:30.186 19:48:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.186 19:48:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:30.186 [2024-11-26 19:48:20.946989] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:30.186 [2024-11-26 19:48:20.947027] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:07:30.186 19:48:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.186 19:48:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:07:30.186 19:48:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.186 19:48:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:30.186 [2024-11-26 19:48:20.954977] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:30.186 [2024-11-26 19:48:20.955018] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:30.186 [2024-11-26 19:48:20.955027] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:30.186 [2024-11-26 19:48:20.955037] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:30.186 [2024-11-26 19:48:20.955043] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:07:30.186 [2024-11-26 19:48:20.955052] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:07:30.186 19:48:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.186 19:48:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:30.186 19:48:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.186 19:48:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:30.186 [2024-11-26 19:48:20.987677] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:30.186 BaseBdev1 
00:07:30.186 19:48:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.186 19:48:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:30.186 19:48:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:07:30.186 19:48:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:30.186 19:48:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:30.186 19:48:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:30.186 19:48:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:30.186 19:48:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:30.186 19:48:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.186 19:48:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:30.186 19:48:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.186 19:48:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:30.186 19:48:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.186 19:48:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:30.186 [ 00:07:30.186 { 00:07:30.186 "name": "BaseBdev1", 00:07:30.186 "aliases": [ 00:07:30.186 "64e2164a-a2d6-40d0-8694-83ca8239893f" 00:07:30.186 ], 00:07:30.186 "product_name": "Malloc disk", 00:07:30.186 "block_size": 512, 00:07:30.186 "num_blocks": 65536, 00:07:30.186 "uuid": "64e2164a-a2d6-40d0-8694-83ca8239893f", 00:07:30.186 "assigned_rate_limits": { 00:07:30.186 
"rw_ios_per_sec": 0, 00:07:30.186 "rw_mbytes_per_sec": 0, 00:07:30.186 "r_mbytes_per_sec": 0, 00:07:30.186 "w_mbytes_per_sec": 0 00:07:30.186 }, 00:07:30.186 "claimed": true, 00:07:30.186 "claim_type": "exclusive_write", 00:07:30.186 "zoned": false, 00:07:30.186 "supported_io_types": { 00:07:30.186 "read": true, 00:07:30.186 "write": true, 00:07:30.186 "unmap": true, 00:07:30.186 "flush": true, 00:07:30.186 "reset": true, 00:07:30.186 "nvme_admin": false, 00:07:30.186 "nvme_io": false, 00:07:30.186 "nvme_io_md": false, 00:07:30.186 "write_zeroes": true, 00:07:30.186 "zcopy": true, 00:07:30.186 "get_zone_info": false, 00:07:30.186 "zone_management": false, 00:07:30.186 "zone_append": false, 00:07:30.186 "compare": false, 00:07:30.186 "compare_and_write": false, 00:07:30.186 "abort": true, 00:07:30.186 "seek_hole": false, 00:07:30.186 "seek_data": false, 00:07:30.186 "copy": true, 00:07:30.186 "nvme_iov_md": false 00:07:30.186 }, 00:07:30.186 "memory_domains": [ 00:07:30.186 { 00:07:30.186 "dma_device_id": "system", 00:07:30.186 "dma_device_type": 1 00:07:30.186 }, 00:07:30.186 { 00:07:30.186 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:30.186 "dma_device_type": 2 00:07:30.186 } 00:07:30.186 ], 00:07:30.186 "driver_specific": {} 00:07:30.186 } 00:07:30.186 ] 00:07:30.186 19:48:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.186 19:48:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:30.186 19:48:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:07:30.186 19:48:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:30.186 19:48:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:30.186 19:48:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:07:30.186 19:48:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:30.186 19:48:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:30.186 19:48:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:30.186 19:48:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:30.186 19:48:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:30.186 19:48:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:30.186 19:48:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:30.186 19:48:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.186 19:48:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:30.186 19:48:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:30.186 19:48:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.186 19:48:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:30.186 "name": "Existed_Raid", 00:07:30.186 "uuid": "aa205cd5-fd96-4b6b-9f61-3b879ae631cf", 00:07:30.186 "strip_size_kb": 0, 00:07:30.186 "state": "configuring", 00:07:30.186 "raid_level": "raid1", 00:07:30.186 "superblock": true, 00:07:30.186 "num_base_bdevs": 3, 00:07:30.186 "num_base_bdevs_discovered": 1, 00:07:30.186 "num_base_bdevs_operational": 3, 00:07:30.186 "base_bdevs_list": [ 00:07:30.186 { 00:07:30.186 "name": "BaseBdev1", 00:07:30.186 "uuid": "64e2164a-a2d6-40d0-8694-83ca8239893f", 00:07:30.186 "is_configured": true, 00:07:30.186 "data_offset": 2048, 00:07:30.186 "data_size": 63488 
00:07:30.186 }, 00:07:30.186 { 00:07:30.186 "name": "BaseBdev2", 00:07:30.186 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:30.186 "is_configured": false, 00:07:30.186 "data_offset": 0, 00:07:30.186 "data_size": 0 00:07:30.186 }, 00:07:30.186 { 00:07:30.186 "name": "BaseBdev3", 00:07:30.186 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:30.186 "is_configured": false, 00:07:30.186 "data_offset": 0, 00:07:30.186 "data_size": 0 00:07:30.186 } 00:07:30.186 ] 00:07:30.186 }' 00:07:30.186 19:48:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:30.186 19:48:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:30.443 19:48:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:30.443 19:48:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.443 19:48:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:30.443 [2024-11-26 19:48:21.307794] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:30.444 [2024-11-26 19:48:21.307843] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:07:30.444 19:48:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.444 19:48:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:07:30.444 19:48:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.444 19:48:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:30.444 [2024-11-26 19:48:21.315835] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:30.444 [2024-11-26 19:48:21.317670] 
bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:30.444 [2024-11-26 19:48:21.317708] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:30.444 [2024-11-26 19:48:21.317717] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:07:30.444 [2024-11-26 19:48:21.317727] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:07:30.444 19:48:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.444 19:48:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:30.444 19:48:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:30.444 19:48:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:07:30.444 19:48:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:30.444 19:48:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:30.444 19:48:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:30.444 19:48:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:30.444 19:48:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:30.444 19:48:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:30.444 19:48:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:30.444 19:48:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:30.444 19:48:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:07:30.444 19:48:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:30.444 19:48:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.444 19:48:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:30.444 19:48:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:30.444 19:48:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.444 19:48:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:30.444 "name": "Existed_Raid", 00:07:30.444 "uuid": "9686a9d1-1801-4cfc-b0ae-d1fc5ded03bc", 00:07:30.444 "strip_size_kb": 0, 00:07:30.444 "state": "configuring", 00:07:30.444 "raid_level": "raid1", 00:07:30.444 "superblock": true, 00:07:30.444 "num_base_bdevs": 3, 00:07:30.444 "num_base_bdevs_discovered": 1, 00:07:30.444 "num_base_bdevs_operational": 3, 00:07:30.444 "base_bdevs_list": [ 00:07:30.444 { 00:07:30.444 "name": "BaseBdev1", 00:07:30.444 "uuid": "64e2164a-a2d6-40d0-8694-83ca8239893f", 00:07:30.444 "is_configured": true, 00:07:30.444 "data_offset": 2048, 00:07:30.444 "data_size": 63488 00:07:30.444 }, 00:07:30.444 { 00:07:30.444 "name": "BaseBdev2", 00:07:30.444 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:30.444 "is_configured": false, 00:07:30.444 "data_offset": 0, 00:07:30.444 "data_size": 0 00:07:30.444 }, 00:07:30.444 { 00:07:30.444 "name": "BaseBdev3", 00:07:30.444 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:30.444 "is_configured": false, 00:07:30.444 "data_offset": 0, 00:07:30.444 "data_size": 0 00:07:30.444 } 00:07:30.444 ] 00:07:30.444 }' 00:07:30.444 19:48:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:30.444 19:48:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:07:30.704 19:48:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:30.704 19:48:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.704 19:48:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:30.962 [2024-11-26 19:48:21.662604] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:30.962 BaseBdev2 00:07:30.962 19:48:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.962 19:48:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:30.962 19:48:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:07:30.962 19:48:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:30.962 19:48:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:30.962 19:48:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:30.963 19:48:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:30.963 19:48:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:30.963 19:48:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.963 19:48:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:30.963 19:48:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.963 19:48:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:30.963 19:48:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:07:30.963 19:48:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:30.963 [ 00:07:30.963 { 00:07:30.963 "name": "BaseBdev2", 00:07:30.963 "aliases": [ 00:07:30.963 "14f4e22c-e5a2-46bf-af24-e75d54d11cc5" 00:07:30.963 ], 00:07:30.963 "product_name": "Malloc disk", 00:07:30.963 "block_size": 512, 00:07:30.963 "num_blocks": 65536, 00:07:30.963 "uuid": "14f4e22c-e5a2-46bf-af24-e75d54d11cc5", 00:07:30.963 "assigned_rate_limits": { 00:07:30.963 "rw_ios_per_sec": 0, 00:07:30.963 "rw_mbytes_per_sec": 0, 00:07:30.963 "r_mbytes_per_sec": 0, 00:07:30.963 "w_mbytes_per_sec": 0 00:07:30.963 }, 00:07:30.963 "claimed": true, 00:07:30.963 "claim_type": "exclusive_write", 00:07:30.963 "zoned": false, 00:07:30.963 "supported_io_types": { 00:07:30.963 "read": true, 00:07:30.963 "write": true, 00:07:30.963 "unmap": true, 00:07:30.963 "flush": true, 00:07:30.963 "reset": true, 00:07:30.963 "nvme_admin": false, 00:07:30.963 "nvme_io": false, 00:07:30.963 "nvme_io_md": false, 00:07:30.963 "write_zeroes": true, 00:07:30.963 "zcopy": true, 00:07:30.963 "get_zone_info": false, 00:07:30.963 "zone_management": false, 00:07:30.963 "zone_append": false, 00:07:30.963 "compare": false, 00:07:30.963 "compare_and_write": false, 00:07:30.963 "abort": true, 00:07:30.963 "seek_hole": false, 00:07:30.963 "seek_data": false, 00:07:30.963 "copy": true, 00:07:30.963 "nvme_iov_md": false 00:07:30.963 }, 00:07:30.963 "memory_domains": [ 00:07:30.963 { 00:07:30.963 "dma_device_id": "system", 00:07:30.963 "dma_device_type": 1 00:07:30.963 }, 00:07:30.963 { 00:07:30.963 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:30.963 "dma_device_type": 2 00:07:30.963 } 00:07:30.963 ], 00:07:30.963 "driver_specific": {} 00:07:30.963 } 00:07:30.963 ] 00:07:30.963 19:48:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.963 19:48:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 
00:07:30.963 19:48:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:30.963 19:48:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:30.963 19:48:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:07:30.963 19:48:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:30.963 19:48:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:30.963 19:48:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:30.963 19:48:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:30.963 19:48:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:30.963 19:48:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:30.963 19:48:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:30.963 19:48:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:30.963 19:48:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:30.963 19:48:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:30.963 19:48:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.963 19:48:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:30.963 19:48:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:30.963 19:48:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.963 
19:48:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:30.963 "name": "Existed_Raid", 00:07:30.963 "uuid": "9686a9d1-1801-4cfc-b0ae-d1fc5ded03bc", 00:07:30.963 "strip_size_kb": 0, 00:07:30.963 "state": "configuring", 00:07:30.963 "raid_level": "raid1", 00:07:30.963 "superblock": true, 00:07:30.963 "num_base_bdevs": 3, 00:07:30.963 "num_base_bdevs_discovered": 2, 00:07:30.963 "num_base_bdevs_operational": 3, 00:07:30.963 "base_bdevs_list": [ 00:07:30.963 { 00:07:30.963 "name": "BaseBdev1", 00:07:30.963 "uuid": "64e2164a-a2d6-40d0-8694-83ca8239893f", 00:07:30.963 "is_configured": true, 00:07:30.963 "data_offset": 2048, 00:07:30.963 "data_size": 63488 00:07:30.963 }, 00:07:30.963 { 00:07:30.963 "name": "BaseBdev2", 00:07:30.963 "uuid": "14f4e22c-e5a2-46bf-af24-e75d54d11cc5", 00:07:30.963 "is_configured": true, 00:07:30.963 "data_offset": 2048, 00:07:30.963 "data_size": 63488 00:07:30.963 }, 00:07:30.963 { 00:07:30.963 "name": "BaseBdev3", 00:07:30.963 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:30.963 "is_configured": false, 00:07:30.963 "data_offset": 0, 00:07:30.963 "data_size": 0 00:07:30.963 } 00:07:30.963 ] 00:07:30.963 }' 00:07:30.963 19:48:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:30.963 19:48:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:31.221 19:48:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:07:31.221 19:48:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.221 19:48:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:31.221 [2024-11-26 19:48:22.021528] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:07:31.221 [2024-11-26 19:48:22.021806] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 
0x617000007e80 00:07:31.221 [2024-11-26 19:48:22.021827] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:31.221 [2024-11-26 19:48:22.022110] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:07:31.221 [2024-11-26 19:48:22.022271] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:31.221 [2024-11-26 19:48:22.022287] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:07:31.221 [2024-11-26 19:48:22.022447] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:31.221 BaseBdev3 00:07:31.221 19:48:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.221 19:48:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:07:31.221 19:48:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:07:31.221 19:48:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:31.221 19:48:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:31.221 19:48:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:31.221 19:48:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:31.221 19:48:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:31.221 19:48:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.221 19:48:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:31.221 19:48:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.221 19:48:22 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:07:31.221 19:48:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.221 19:48:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:31.221 [ 00:07:31.221 { 00:07:31.221 "name": "BaseBdev3", 00:07:31.221 "aliases": [ 00:07:31.221 "2178e827-4d3d-4359-8fcc-145ac4d0d200" 00:07:31.221 ], 00:07:31.221 "product_name": "Malloc disk", 00:07:31.221 "block_size": 512, 00:07:31.221 "num_blocks": 65536, 00:07:31.221 "uuid": "2178e827-4d3d-4359-8fcc-145ac4d0d200", 00:07:31.221 "assigned_rate_limits": { 00:07:31.221 "rw_ios_per_sec": 0, 00:07:31.221 "rw_mbytes_per_sec": 0, 00:07:31.221 "r_mbytes_per_sec": 0, 00:07:31.221 "w_mbytes_per_sec": 0 00:07:31.221 }, 00:07:31.221 "claimed": true, 00:07:31.221 "claim_type": "exclusive_write", 00:07:31.221 "zoned": false, 00:07:31.221 "supported_io_types": { 00:07:31.221 "read": true, 00:07:31.221 "write": true, 00:07:31.221 "unmap": true, 00:07:31.221 "flush": true, 00:07:31.221 "reset": true, 00:07:31.221 "nvme_admin": false, 00:07:31.221 "nvme_io": false, 00:07:31.221 "nvme_io_md": false, 00:07:31.221 "write_zeroes": true, 00:07:31.221 "zcopy": true, 00:07:31.221 "get_zone_info": false, 00:07:31.221 "zone_management": false, 00:07:31.221 "zone_append": false, 00:07:31.221 "compare": false, 00:07:31.221 "compare_and_write": false, 00:07:31.221 "abort": true, 00:07:31.221 "seek_hole": false, 00:07:31.221 "seek_data": false, 00:07:31.221 "copy": true, 00:07:31.221 "nvme_iov_md": false 00:07:31.221 }, 00:07:31.221 "memory_domains": [ 00:07:31.221 { 00:07:31.221 "dma_device_id": "system", 00:07:31.221 "dma_device_type": 1 00:07:31.221 }, 00:07:31.221 { 00:07:31.221 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:31.221 "dma_device_type": 2 00:07:31.221 } 00:07:31.221 ], 00:07:31.221 "driver_specific": {} 00:07:31.221 } 00:07:31.221 ] 
00:07:31.221 19:48:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.221 19:48:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:31.221 19:48:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:31.221 19:48:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:31.221 19:48:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:07:31.221 19:48:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:31.221 19:48:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:31.221 19:48:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:31.221 19:48:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:31.221 19:48:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:31.221 19:48:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:31.221 19:48:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:31.221 19:48:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:31.221 19:48:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:31.221 19:48:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:31.221 19:48:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:31.221 19:48:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.221 
19:48:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:31.221 19:48:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.221 19:48:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:31.221 "name": "Existed_Raid", 00:07:31.221 "uuid": "9686a9d1-1801-4cfc-b0ae-d1fc5ded03bc", 00:07:31.221 "strip_size_kb": 0, 00:07:31.221 "state": "online", 00:07:31.221 "raid_level": "raid1", 00:07:31.221 "superblock": true, 00:07:31.221 "num_base_bdevs": 3, 00:07:31.221 "num_base_bdevs_discovered": 3, 00:07:31.221 "num_base_bdevs_operational": 3, 00:07:31.221 "base_bdevs_list": [ 00:07:31.221 { 00:07:31.221 "name": "BaseBdev1", 00:07:31.221 "uuid": "64e2164a-a2d6-40d0-8694-83ca8239893f", 00:07:31.221 "is_configured": true, 00:07:31.221 "data_offset": 2048, 00:07:31.221 "data_size": 63488 00:07:31.221 }, 00:07:31.221 { 00:07:31.221 "name": "BaseBdev2", 00:07:31.221 "uuid": "14f4e22c-e5a2-46bf-af24-e75d54d11cc5", 00:07:31.221 "is_configured": true, 00:07:31.221 "data_offset": 2048, 00:07:31.221 "data_size": 63488 00:07:31.221 }, 00:07:31.221 { 00:07:31.221 "name": "BaseBdev3", 00:07:31.221 "uuid": "2178e827-4d3d-4359-8fcc-145ac4d0d200", 00:07:31.221 "is_configured": true, 00:07:31.221 "data_offset": 2048, 00:07:31.221 "data_size": 63488 00:07:31.221 } 00:07:31.221 ] 00:07:31.221 }' 00:07:31.221 19:48:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:31.221 19:48:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:31.479 19:48:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:31.479 19:48:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:31.479 19:48:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
00:07:31.479 19:48:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:31.479 19:48:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:07:31.479 19:48:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:31.479 19:48:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:31.479 19:48:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:31.479 19:48:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.479 19:48:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:31.479 [2024-11-26 19:48:22.386013] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:31.479 19:48:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.479 19:48:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:31.479 "name": "Existed_Raid", 00:07:31.479 "aliases": [ 00:07:31.479 "9686a9d1-1801-4cfc-b0ae-d1fc5ded03bc" 00:07:31.479 ], 00:07:31.479 "product_name": "Raid Volume", 00:07:31.479 "block_size": 512, 00:07:31.479 "num_blocks": 63488, 00:07:31.479 "uuid": "9686a9d1-1801-4cfc-b0ae-d1fc5ded03bc", 00:07:31.479 "assigned_rate_limits": { 00:07:31.479 "rw_ios_per_sec": 0, 00:07:31.479 "rw_mbytes_per_sec": 0, 00:07:31.479 "r_mbytes_per_sec": 0, 00:07:31.479 "w_mbytes_per_sec": 0 00:07:31.479 }, 00:07:31.479 "claimed": false, 00:07:31.479 "zoned": false, 00:07:31.479 "supported_io_types": { 00:07:31.479 "read": true, 00:07:31.479 "write": true, 00:07:31.479 "unmap": false, 00:07:31.479 "flush": false, 00:07:31.479 "reset": true, 00:07:31.479 "nvme_admin": false, 00:07:31.479 "nvme_io": false, 00:07:31.479 "nvme_io_md": false, 00:07:31.479 "write_zeroes": true, 
00:07:31.479 "zcopy": false, 00:07:31.479 "get_zone_info": false, 00:07:31.479 "zone_management": false, 00:07:31.479 "zone_append": false, 00:07:31.479 "compare": false, 00:07:31.479 "compare_and_write": false, 00:07:31.479 "abort": false, 00:07:31.479 "seek_hole": false, 00:07:31.479 "seek_data": false, 00:07:31.479 "copy": false, 00:07:31.479 "nvme_iov_md": false 00:07:31.479 }, 00:07:31.479 "memory_domains": [ 00:07:31.479 { 00:07:31.479 "dma_device_id": "system", 00:07:31.479 "dma_device_type": 1 00:07:31.479 }, 00:07:31.479 { 00:07:31.479 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:31.479 "dma_device_type": 2 00:07:31.479 }, 00:07:31.479 { 00:07:31.479 "dma_device_id": "system", 00:07:31.479 "dma_device_type": 1 00:07:31.479 }, 00:07:31.479 { 00:07:31.479 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:31.479 "dma_device_type": 2 00:07:31.479 }, 00:07:31.479 { 00:07:31.479 "dma_device_id": "system", 00:07:31.479 "dma_device_type": 1 00:07:31.479 }, 00:07:31.479 { 00:07:31.479 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:31.479 "dma_device_type": 2 00:07:31.479 } 00:07:31.479 ], 00:07:31.479 "driver_specific": { 00:07:31.479 "raid": { 00:07:31.479 "uuid": "9686a9d1-1801-4cfc-b0ae-d1fc5ded03bc", 00:07:31.479 "strip_size_kb": 0, 00:07:31.479 "state": "online", 00:07:31.479 "raid_level": "raid1", 00:07:31.479 "superblock": true, 00:07:31.479 "num_base_bdevs": 3, 00:07:31.479 "num_base_bdevs_discovered": 3, 00:07:31.479 "num_base_bdevs_operational": 3, 00:07:31.479 "base_bdevs_list": [ 00:07:31.479 { 00:07:31.479 "name": "BaseBdev1", 00:07:31.479 "uuid": "64e2164a-a2d6-40d0-8694-83ca8239893f", 00:07:31.479 "is_configured": true, 00:07:31.479 "data_offset": 2048, 00:07:31.479 "data_size": 63488 00:07:31.479 }, 00:07:31.479 { 00:07:31.479 "name": "BaseBdev2", 00:07:31.479 "uuid": "14f4e22c-e5a2-46bf-af24-e75d54d11cc5", 00:07:31.479 "is_configured": true, 00:07:31.479 "data_offset": 2048, 00:07:31.479 "data_size": 63488 00:07:31.479 }, 00:07:31.479 { 
00:07:31.479 "name": "BaseBdev3", 00:07:31.479 "uuid": "2178e827-4d3d-4359-8fcc-145ac4d0d200", 00:07:31.479 "is_configured": true, 00:07:31.479 "data_offset": 2048, 00:07:31.479 "data_size": 63488 00:07:31.479 } 00:07:31.479 ] 00:07:31.479 } 00:07:31.479 } 00:07:31.479 }' 00:07:31.479 19:48:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:31.737 19:48:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:31.737 BaseBdev2 00:07:31.737 BaseBdev3' 00:07:31.737 19:48:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:31.737 19:48:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:31.737 19:48:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:31.737 19:48:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:31.737 19:48:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.737 19:48:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:31.737 19:48:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:31.737 19:48:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.737 19:48:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:31.737 19:48:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:31.737 19:48:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:31.737 19:48:22 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:31.737 19:48:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:31.737 19:48:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.737 19:48:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:31.737 19:48:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.737 19:48:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:31.737 19:48:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:31.737 19:48:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:31.737 19:48:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:07:31.737 19:48:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:31.737 19:48:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.737 19:48:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:31.737 19:48:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.737 19:48:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:31.737 19:48:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:31.737 19:48:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:31.737 19:48:22 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.737 19:48:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:31.737 [2024-11-26 19:48:22.577770] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:31.737 19:48:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.737 19:48:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:31.737 19:48:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:07:31.737 19:48:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:31.737 19:48:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:07:31.737 19:48:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:07:31.737 19:48:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:07:31.737 19:48:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:31.737 19:48:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:31.737 19:48:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:31.737 19:48:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:31.737 19:48:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:31.737 19:48:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:31.737 19:48:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:31.737 19:48:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:31.737 
19:48:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:31.737 19:48:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:31.737 19:48:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.737 19:48:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:31.737 19:48:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:31.737 19:48:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.996 19:48:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:31.996 "name": "Existed_Raid", 00:07:31.996 "uuid": "9686a9d1-1801-4cfc-b0ae-d1fc5ded03bc", 00:07:31.996 "strip_size_kb": 0, 00:07:31.996 "state": "online", 00:07:31.996 "raid_level": "raid1", 00:07:31.996 "superblock": true, 00:07:31.996 "num_base_bdevs": 3, 00:07:31.996 "num_base_bdevs_discovered": 2, 00:07:31.996 "num_base_bdevs_operational": 2, 00:07:31.996 "base_bdevs_list": [ 00:07:31.996 { 00:07:31.996 "name": null, 00:07:31.996 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:31.996 "is_configured": false, 00:07:31.996 "data_offset": 0, 00:07:31.996 "data_size": 63488 00:07:31.996 }, 00:07:31.996 { 00:07:31.996 "name": "BaseBdev2", 00:07:31.996 "uuid": "14f4e22c-e5a2-46bf-af24-e75d54d11cc5", 00:07:31.996 "is_configured": true, 00:07:31.996 "data_offset": 2048, 00:07:31.996 "data_size": 63488 00:07:31.996 }, 00:07:31.996 { 00:07:31.996 "name": "BaseBdev3", 00:07:31.996 "uuid": "2178e827-4d3d-4359-8fcc-145ac4d0d200", 00:07:31.996 "is_configured": true, 00:07:31.996 "data_offset": 2048, 00:07:31.996 "data_size": 63488 00:07:31.996 } 00:07:31.996 ] 00:07:31.996 }' 00:07:31.996 19:48:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:31.996 
19:48:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:32.255 19:48:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:32.255 19:48:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:32.255 19:48:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:32.255 19:48:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:32.255 19:48:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.255 19:48:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:32.255 19:48:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.255 19:48:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:32.255 19:48:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:32.255 19:48:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:32.255 19:48:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.255 19:48:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:32.255 [2024-11-26 19:48:22.984249] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:32.255 19:48:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.255 19:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:32.255 19:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:32.255 19:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:07:32.255 19:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:32.255 19:48:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.255 19:48:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:32.255 19:48:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.255 19:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:32.255 19:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:32.255 19:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:07:32.255 19:48:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.255 19:48:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:32.255 [2024-11-26 19:48:23.107330] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:07:32.255 [2024-11-26 19:48:23.107605] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:32.255 [2024-11-26 19:48:23.170271] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:32.255 [2024-11-26 19:48:23.170330] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:32.255 [2024-11-26 19:48:23.170363] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:07:32.255 19:48:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.255 19:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:32.255 19:48:23 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:32.255 19:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:32.255 19:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:32.255 19:48:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.255 19:48:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:32.255 19:48:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.513 19:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:32.513 19:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:32.514 19:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:07:32.514 19:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:07:32.514 19:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:07:32.514 19:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:32.514 19:48:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.514 19:48:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:32.514 BaseBdev2 00:07:32.514 19:48:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.514 19:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:07:32.514 19:48:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:07:32.514 19:48:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 
00:07:32.514 19:48:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:32.514 19:48:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:32.514 19:48:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:32.514 19:48:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:32.514 19:48:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.514 19:48:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:32.514 19:48:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.514 19:48:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:32.514 19:48:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.514 19:48:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:32.514 [ 00:07:32.514 { 00:07:32.514 "name": "BaseBdev2", 00:07:32.514 "aliases": [ 00:07:32.514 "f1d8e18b-cb39-4b9c-8e42-429ea3307c02" 00:07:32.514 ], 00:07:32.514 "product_name": "Malloc disk", 00:07:32.514 "block_size": 512, 00:07:32.514 "num_blocks": 65536, 00:07:32.514 "uuid": "f1d8e18b-cb39-4b9c-8e42-429ea3307c02", 00:07:32.514 "assigned_rate_limits": { 00:07:32.514 "rw_ios_per_sec": 0, 00:07:32.514 "rw_mbytes_per_sec": 0, 00:07:32.514 "r_mbytes_per_sec": 0, 00:07:32.514 "w_mbytes_per_sec": 0 00:07:32.514 }, 00:07:32.514 "claimed": false, 00:07:32.514 "zoned": false, 00:07:32.514 "supported_io_types": { 00:07:32.514 "read": true, 00:07:32.514 "write": true, 00:07:32.514 "unmap": true, 00:07:32.514 "flush": true, 00:07:32.514 "reset": true, 00:07:32.514 "nvme_admin": false, 00:07:32.514 "nvme_io": false, 00:07:32.514 
"nvme_io_md": false, 00:07:32.514 "write_zeroes": true, 00:07:32.514 "zcopy": true, 00:07:32.514 "get_zone_info": false, 00:07:32.514 "zone_management": false, 00:07:32.514 "zone_append": false, 00:07:32.514 "compare": false, 00:07:32.514 "compare_and_write": false, 00:07:32.514 "abort": true, 00:07:32.514 "seek_hole": false, 00:07:32.514 "seek_data": false, 00:07:32.514 "copy": true, 00:07:32.514 "nvme_iov_md": false 00:07:32.514 }, 00:07:32.514 "memory_domains": [ 00:07:32.514 { 00:07:32.514 "dma_device_id": "system", 00:07:32.514 "dma_device_type": 1 00:07:32.514 }, 00:07:32.514 { 00:07:32.514 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:32.514 "dma_device_type": 2 00:07:32.514 } 00:07:32.514 ], 00:07:32.514 "driver_specific": {} 00:07:32.514 } 00:07:32.514 ] 00:07:32.514 19:48:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.514 19:48:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:32.514 19:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:07:32.514 19:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:07:32.514 19:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:07:32.514 19:48:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.514 19:48:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:32.514 BaseBdev3 00:07:32.514 19:48:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.514 19:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:07:32.514 19:48:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:07:32.514 19:48:23 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:32.514 19:48:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:32.514 19:48:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:32.514 19:48:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:32.514 19:48:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:32.514 19:48:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.514 19:48:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:32.514 19:48:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.514 19:48:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:07:32.514 19:48:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.514 19:48:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:32.514 [ 00:07:32.514 { 00:07:32.514 "name": "BaseBdev3", 00:07:32.514 "aliases": [ 00:07:32.514 "ed16b7b7-3f49-4da9-89b8-9c65a9313eb7" 00:07:32.514 ], 00:07:32.514 "product_name": "Malloc disk", 00:07:32.514 "block_size": 512, 00:07:32.514 "num_blocks": 65536, 00:07:32.514 "uuid": "ed16b7b7-3f49-4da9-89b8-9c65a9313eb7", 00:07:32.514 "assigned_rate_limits": { 00:07:32.514 "rw_ios_per_sec": 0, 00:07:32.514 "rw_mbytes_per_sec": 0, 00:07:32.514 "r_mbytes_per_sec": 0, 00:07:32.514 "w_mbytes_per_sec": 0 00:07:32.514 }, 00:07:32.514 "claimed": false, 00:07:32.514 "zoned": false, 00:07:32.514 "supported_io_types": { 00:07:32.514 "read": true, 00:07:32.514 "write": true, 00:07:32.514 "unmap": true, 00:07:32.514 "flush": true, 00:07:32.514 "reset": true, 00:07:32.514 "nvme_admin": false, 
00:07:32.514 "nvme_io": false, 00:07:32.514 "nvme_io_md": false, 00:07:32.514 "write_zeroes": true, 00:07:32.514 "zcopy": true, 00:07:32.514 "get_zone_info": false, 00:07:32.514 "zone_management": false, 00:07:32.514 "zone_append": false, 00:07:32.514 "compare": false, 00:07:32.514 "compare_and_write": false, 00:07:32.514 "abort": true, 00:07:32.514 "seek_hole": false, 00:07:32.514 "seek_data": false, 00:07:32.514 "copy": true, 00:07:32.514 "nvme_iov_md": false 00:07:32.514 }, 00:07:32.514 "memory_domains": [ 00:07:32.514 { 00:07:32.514 "dma_device_id": "system", 00:07:32.514 "dma_device_type": 1 00:07:32.514 }, 00:07:32.514 { 00:07:32.514 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:32.514 "dma_device_type": 2 00:07:32.514 } 00:07:32.514 ], 00:07:32.514 "driver_specific": {} 00:07:32.514 } 00:07:32.514 ] 00:07:32.514 19:48:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.514 19:48:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:32.514 19:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:07:32.514 19:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:07:32.514 19:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:07:32.514 19:48:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.514 19:48:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:32.514 [2024-11-26 19:48:23.331186] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:32.514 [2024-11-26 19:48:23.331362] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:32.514 [2024-11-26 19:48:23.331447] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:32.514 [2024-11-26 19:48:23.333509] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:07:32.514 19:48:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.514 19:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:07:32.514 19:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:32.514 19:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:32.514 19:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:32.514 19:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:32.514 19:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:32.514 19:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:32.514 19:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:32.514 19:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:32.514 19:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:32.514 19:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:32.514 19:48:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.514 19:48:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:32.514 19:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:32.515 
19:48:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.515 19:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:32.515 "name": "Existed_Raid", 00:07:32.515 "uuid": "7642ea3b-6d69-4224-b1c9-4d52d57391b5", 00:07:32.515 "strip_size_kb": 0, 00:07:32.515 "state": "configuring", 00:07:32.515 "raid_level": "raid1", 00:07:32.515 "superblock": true, 00:07:32.515 "num_base_bdevs": 3, 00:07:32.515 "num_base_bdevs_discovered": 2, 00:07:32.515 "num_base_bdevs_operational": 3, 00:07:32.515 "base_bdevs_list": [ 00:07:32.515 { 00:07:32.515 "name": "BaseBdev1", 00:07:32.515 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:32.515 "is_configured": false, 00:07:32.515 "data_offset": 0, 00:07:32.515 "data_size": 0 00:07:32.515 }, 00:07:32.515 { 00:07:32.515 "name": "BaseBdev2", 00:07:32.515 "uuid": "f1d8e18b-cb39-4b9c-8e42-429ea3307c02", 00:07:32.515 "is_configured": true, 00:07:32.515 "data_offset": 2048, 00:07:32.515 "data_size": 63488 00:07:32.515 }, 00:07:32.515 { 00:07:32.515 "name": "BaseBdev3", 00:07:32.515 "uuid": "ed16b7b7-3f49-4da9-89b8-9c65a9313eb7", 00:07:32.515 "is_configured": true, 00:07:32.515 "data_offset": 2048, 00:07:32.515 "data_size": 63488 00:07:32.515 } 00:07:32.515 ] 00:07:32.515 }' 00:07:32.515 19:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:32.515 19:48:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:32.773 19:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:07:32.773 19:48:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.773 19:48:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:32.773 [2024-11-26 19:48:23.655290] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:32.773 19:48:23 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.773 19:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:07:32.773 19:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:32.773 19:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:32.773 19:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:32.773 19:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:32.773 19:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:32.773 19:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:32.773 19:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:32.773 19:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:32.773 19:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:32.773 19:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:32.773 19:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:32.773 19:48:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.773 19:48:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:32.773 19:48:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.773 19:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:32.773 "name": 
"Existed_Raid", 00:07:32.773 "uuid": "7642ea3b-6d69-4224-b1c9-4d52d57391b5", 00:07:32.773 "strip_size_kb": 0, 00:07:32.773 "state": "configuring", 00:07:32.773 "raid_level": "raid1", 00:07:32.773 "superblock": true, 00:07:32.773 "num_base_bdevs": 3, 00:07:32.773 "num_base_bdevs_discovered": 1, 00:07:32.773 "num_base_bdevs_operational": 3, 00:07:32.773 "base_bdevs_list": [ 00:07:32.773 { 00:07:32.773 "name": "BaseBdev1", 00:07:32.773 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:32.773 "is_configured": false, 00:07:32.773 "data_offset": 0, 00:07:32.773 "data_size": 0 00:07:32.773 }, 00:07:32.773 { 00:07:32.773 "name": null, 00:07:32.773 "uuid": "f1d8e18b-cb39-4b9c-8e42-429ea3307c02", 00:07:32.773 "is_configured": false, 00:07:32.773 "data_offset": 0, 00:07:32.773 "data_size": 63488 00:07:32.773 }, 00:07:32.773 { 00:07:32.773 "name": "BaseBdev3", 00:07:32.773 "uuid": "ed16b7b7-3f49-4da9-89b8-9c65a9313eb7", 00:07:32.773 "is_configured": true, 00:07:32.773 "data_offset": 2048, 00:07:32.773 "data_size": 63488 00:07:32.773 } 00:07:32.773 ] 00:07:32.773 }' 00:07:32.773 19:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:32.773 19:48:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:33.340 19:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:33.340 19:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:07:33.340 19:48:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.340 19:48:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:33.340 19:48:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.340 19:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:07:33.340 
19:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:33.340 19:48:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.340 19:48:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:33.340 [2024-11-26 19:48:24.028691] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:33.340 BaseBdev1 00:07:33.340 19:48:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.340 19:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:07:33.340 19:48:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:07:33.340 19:48:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:33.340 19:48:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:33.340 19:48:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:33.340 19:48:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:33.340 19:48:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:33.340 19:48:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.340 19:48:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:33.340 19:48:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.340 19:48:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:33.340 19:48:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:07:33.340 19:48:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:33.340 [ 00:07:33.340 { 00:07:33.340 "name": "BaseBdev1", 00:07:33.340 "aliases": [ 00:07:33.340 "1efaddc3-ba2e-4ca5-bcfd-63a3e26e0ccf" 00:07:33.340 ], 00:07:33.340 "product_name": "Malloc disk", 00:07:33.340 "block_size": 512, 00:07:33.340 "num_blocks": 65536, 00:07:33.340 "uuid": "1efaddc3-ba2e-4ca5-bcfd-63a3e26e0ccf", 00:07:33.340 "assigned_rate_limits": { 00:07:33.340 "rw_ios_per_sec": 0, 00:07:33.340 "rw_mbytes_per_sec": 0, 00:07:33.340 "r_mbytes_per_sec": 0, 00:07:33.340 "w_mbytes_per_sec": 0 00:07:33.340 }, 00:07:33.340 "claimed": true, 00:07:33.340 "claim_type": "exclusive_write", 00:07:33.340 "zoned": false, 00:07:33.340 "supported_io_types": { 00:07:33.340 "read": true, 00:07:33.340 "write": true, 00:07:33.340 "unmap": true, 00:07:33.340 "flush": true, 00:07:33.340 "reset": true, 00:07:33.340 "nvme_admin": false, 00:07:33.340 "nvme_io": false, 00:07:33.340 "nvme_io_md": false, 00:07:33.340 "write_zeroes": true, 00:07:33.340 "zcopy": true, 00:07:33.340 "get_zone_info": false, 00:07:33.340 "zone_management": false, 00:07:33.340 "zone_append": false, 00:07:33.340 "compare": false, 00:07:33.340 "compare_and_write": false, 00:07:33.340 "abort": true, 00:07:33.340 "seek_hole": false, 00:07:33.340 "seek_data": false, 00:07:33.340 "copy": true, 00:07:33.340 "nvme_iov_md": false 00:07:33.340 }, 00:07:33.340 "memory_domains": [ 00:07:33.340 { 00:07:33.340 "dma_device_id": "system", 00:07:33.340 "dma_device_type": 1 00:07:33.340 }, 00:07:33.340 { 00:07:33.340 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:33.340 "dma_device_type": 2 00:07:33.340 } 00:07:33.340 ], 00:07:33.340 "driver_specific": {} 00:07:33.340 } 00:07:33.340 ] 00:07:33.340 19:48:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.340 19:48:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:33.340 
19:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:07:33.340 19:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:33.340 19:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:33.340 19:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:33.340 19:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:33.340 19:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:33.340 19:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:33.340 19:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:33.340 19:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:33.340 19:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:33.340 19:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:33.340 19:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:33.340 19:48:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.340 19:48:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:33.340 19:48:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.340 19:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:33.340 "name": "Existed_Raid", 00:07:33.340 "uuid": "7642ea3b-6d69-4224-b1c9-4d52d57391b5", 00:07:33.340 "strip_size_kb": 0, 
00:07:33.340 "state": "configuring", 00:07:33.340 "raid_level": "raid1", 00:07:33.340 "superblock": true, 00:07:33.341 "num_base_bdevs": 3, 00:07:33.341 "num_base_bdevs_discovered": 2, 00:07:33.341 "num_base_bdevs_operational": 3, 00:07:33.341 "base_bdevs_list": [ 00:07:33.341 { 00:07:33.341 "name": "BaseBdev1", 00:07:33.341 "uuid": "1efaddc3-ba2e-4ca5-bcfd-63a3e26e0ccf", 00:07:33.341 "is_configured": true, 00:07:33.341 "data_offset": 2048, 00:07:33.341 "data_size": 63488 00:07:33.341 }, 00:07:33.341 { 00:07:33.341 "name": null, 00:07:33.341 "uuid": "f1d8e18b-cb39-4b9c-8e42-429ea3307c02", 00:07:33.341 "is_configured": false, 00:07:33.341 "data_offset": 0, 00:07:33.341 "data_size": 63488 00:07:33.341 }, 00:07:33.341 { 00:07:33.341 "name": "BaseBdev3", 00:07:33.341 "uuid": "ed16b7b7-3f49-4da9-89b8-9c65a9313eb7", 00:07:33.341 "is_configured": true, 00:07:33.341 "data_offset": 2048, 00:07:33.341 "data_size": 63488 00:07:33.341 } 00:07:33.341 ] 00:07:33.341 }' 00:07:33.341 19:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:33.341 19:48:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:33.599 19:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:33.599 19:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:07:33.599 19:48:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.599 19:48:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:33.599 19:48:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.599 19:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:07:33.599 19:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev 
BaseBdev3 00:07:33.599 19:48:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.599 19:48:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:33.599 [2024-11-26 19:48:24.388830] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:07:33.599 19:48:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.599 19:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:07:33.599 19:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:33.599 19:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:33.599 19:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:33.599 19:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:33.599 19:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:33.599 19:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:33.599 19:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:33.599 19:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:33.599 19:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:33.599 19:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:33.599 19:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:33.599 19:48:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:07:33.599 19:48:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:33.599 19:48:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.599 19:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:33.599 "name": "Existed_Raid", 00:07:33.599 "uuid": "7642ea3b-6d69-4224-b1c9-4d52d57391b5", 00:07:33.599 "strip_size_kb": 0, 00:07:33.599 "state": "configuring", 00:07:33.599 "raid_level": "raid1", 00:07:33.599 "superblock": true, 00:07:33.599 "num_base_bdevs": 3, 00:07:33.599 "num_base_bdevs_discovered": 1, 00:07:33.599 "num_base_bdevs_operational": 3, 00:07:33.599 "base_bdevs_list": [ 00:07:33.599 { 00:07:33.599 "name": "BaseBdev1", 00:07:33.599 "uuid": "1efaddc3-ba2e-4ca5-bcfd-63a3e26e0ccf", 00:07:33.599 "is_configured": true, 00:07:33.599 "data_offset": 2048, 00:07:33.599 "data_size": 63488 00:07:33.599 }, 00:07:33.599 { 00:07:33.599 "name": null, 00:07:33.599 "uuid": "f1d8e18b-cb39-4b9c-8e42-429ea3307c02", 00:07:33.599 "is_configured": false, 00:07:33.599 "data_offset": 0, 00:07:33.599 "data_size": 63488 00:07:33.599 }, 00:07:33.599 { 00:07:33.599 "name": null, 00:07:33.599 "uuid": "ed16b7b7-3f49-4da9-89b8-9c65a9313eb7", 00:07:33.599 "is_configured": false, 00:07:33.599 "data_offset": 0, 00:07:33.599 "data_size": 63488 00:07:33.599 } 00:07:33.599 ] 00:07:33.599 }' 00:07:33.599 19:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:33.599 19:48:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:33.858 19:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:33.858 19:48:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.858 19:48:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:33.858 19:48:24 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:07:33.858 19:48:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.858 19:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:07:33.858 19:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:07:33.858 19:48:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.858 19:48:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:33.858 [2024-11-26 19:48:24.756963] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:07:33.858 19:48:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.858 19:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:07:33.858 19:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:33.858 19:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:33.858 19:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:33.858 19:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:33.858 19:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:33.858 19:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:33.858 19:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:33.858 19:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:07:33.859 19:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:33.859 19:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:33.859 19:48:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.859 19:48:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:33.859 19:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:33.859 19:48:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.117 19:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:34.117 "name": "Existed_Raid", 00:07:34.117 "uuid": "7642ea3b-6d69-4224-b1c9-4d52d57391b5", 00:07:34.117 "strip_size_kb": 0, 00:07:34.117 "state": "configuring", 00:07:34.117 "raid_level": "raid1", 00:07:34.117 "superblock": true, 00:07:34.117 "num_base_bdevs": 3, 00:07:34.117 "num_base_bdevs_discovered": 2, 00:07:34.117 "num_base_bdevs_operational": 3, 00:07:34.117 "base_bdevs_list": [ 00:07:34.117 { 00:07:34.117 "name": "BaseBdev1", 00:07:34.117 "uuid": "1efaddc3-ba2e-4ca5-bcfd-63a3e26e0ccf", 00:07:34.117 "is_configured": true, 00:07:34.117 "data_offset": 2048, 00:07:34.117 "data_size": 63488 00:07:34.117 }, 00:07:34.117 { 00:07:34.117 "name": null, 00:07:34.117 "uuid": "f1d8e18b-cb39-4b9c-8e42-429ea3307c02", 00:07:34.117 "is_configured": false, 00:07:34.117 "data_offset": 0, 00:07:34.117 "data_size": 63488 00:07:34.117 }, 00:07:34.117 { 00:07:34.117 "name": "BaseBdev3", 00:07:34.117 "uuid": "ed16b7b7-3f49-4da9-89b8-9c65a9313eb7", 00:07:34.117 "is_configured": true, 00:07:34.117 "data_offset": 2048, 00:07:34.117 "data_size": 63488 00:07:34.117 } 00:07:34.117 ] 00:07:34.117 }' 00:07:34.117 19:48:24 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:34.117 19:48:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:34.374 19:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:34.374 19:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.374 19:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:34.374 19:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:07:34.374 19:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.374 19:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:07:34.374 19:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:34.375 19:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.375 19:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:34.375 [2024-11-26 19:48:25.169076] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:34.375 19:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.375 19:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:07:34.375 19:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:34.375 19:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:34.375 19:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:34.375 19:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- 
# local strip_size=0 00:07:34.375 19:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:34.375 19:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:34.375 19:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:34.375 19:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:34.375 19:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:34.375 19:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:34.375 19:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:34.375 19:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.375 19:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:34.375 19:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.375 19:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:34.375 "name": "Existed_Raid", 00:07:34.375 "uuid": "7642ea3b-6d69-4224-b1c9-4d52d57391b5", 00:07:34.375 "strip_size_kb": 0, 00:07:34.375 "state": "configuring", 00:07:34.375 "raid_level": "raid1", 00:07:34.375 "superblock": true, 00:07:34.375 "num_base_bdevs": 3, 00:07:34.375 "num_base_bdevs_discovered": 1, 00:07:34.375 "num_base_bdevs_operational": 3, 00:07:34.375 "base_bdevs_list": [ 00:07:34.375 { 00:07:34.375 "name": null, 00:07:34.375 "uuid": "1efaddc3-ba2e-4ca5-bcfd-63a3e26e0ccf", 00:07:34.375 "is_configured": false, 00:07:34.375 "data_offset": 0, 00:07:34.375 "data_size": 63488 00:07:34.375 }, 00:07:34.375 { 00:07:34.375 "name": null, 00:07:34.375 "uuid": 
"f1d8e18b-cb39-4b9c-8e42-429ea3307c02", 00:07:34.375 "is_configured": false, 00:07:34.375 "data_offset": 0, 00:07:34.375 "data_size": 63488 00:07:34.375 }, 00:07:34.375 { 00:07:34.375 "name": "BaseBdev3", 00:07:34.375 "uuid": "ed16b7b7-3f49-4da9-89b8-9c65a9313eb7", 00:07:34.375 "is_configured": true, 00:07:34.375 "data_offset": 2048, 00:07:34.375 "data_size": 63488 00:07:34.375 } 00:07:34.375 ] 00:07:34.375 }' 00:07:34.375 19:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:34.375 19:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:34.638 19:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:34.638 19:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:07:34.638 19:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.638 19:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:34.638 19:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.897 19:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:07:34.897 19:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:07:34.897 19:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.897 19:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:34.897 [2024-11-26 19:48:25.580587] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:34.897 19:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.897 19:48:25 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:07:34.897 19:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:34.897 19:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:34.897 19:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:34.897 19:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:34.897 19:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:34.897 19:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:34.897 19:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:34.897 19:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:34.897 19:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:34.897 19:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:34.897 19:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.897 19:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:34.897 19:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:34.897 19:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.897 19:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:34.897 "name": "Existed_Raid", 00:07:34.897 "uuid": "7642ea3b-6d69-4224-b1c9-4d52d57391b5", 00:07:34.897 "strip_size_kb": 0, 00:07:34.897 "state": "configuring", 00:07:34.897 
"raid_level": "raid1", 00:07:34.897 "superblock": true, 00:07:34.897 "num_base_bdevs": 3, 00:07:34.897 "num_base_bdevs_discovered": 2, 00:07:34.897 "num_base_bdevs_operational": 3, 00:07:34.897 "base_bdevs_list": [ 00:07:34.897 { 00:07:34.897 "name": null, 00:07:34.897 "uuid": "1efaddc3-ba2e-4ca5-bcfd-63a3e26e0ccf", 00:07:34.897 "is_configured": false, 00:07:34.897 "data_offset": 0, 00:07:34.897 "data_size": 63488 00:07:34.897 }, 00:07:34.897 { 00:07:34.897 "name": "BaseBdev2", 00:07:34.897 "uuid": "f1d8e18b-cb39-4b9c-8e42-429ea3307c02", 00:07:34.897 "is_configured": true, 00:07:34.897 "data_offset": 2048, 00:07:34.897 "data_size": 63488 00:07:34.897 }, 00:07:34.897 { 00:07:34.897 "name": "BaseBdev3", 00:07:34.897 "uuid": "ed16b7b7-3f49-4da9-89b8-9c65a9313eb7", 00:07:34.897 "is_configured": true, 00:07:34.897 "data_offset": 2048, 00:07:34.897 "data_size": 63488 00:07:34.897 } 00:07:34.897 ] 00:07:34.897 }' 00:07:34.897 19:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:34.897 19:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:35.157 19:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:35.157 19:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.157 19:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:35.157 19:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:07:35.157 19:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.157 19:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:07:35.157 19:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:35.157 19:48:25 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:07:35.157 19:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.157 19:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:35.157 19:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.157 19:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 1efaddc3-ba2e-4ca5-bcfd-63a3e26e0ccf 00:07:35.157 19:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.157 19:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:35.157 [2024-11-26 19:48:25.997383] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:07:35.157 [2024-11-26 19:48:25.997574] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:07:35.157 [2024-11-26 19:48:25.997584] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:35.157 NewBaseBdev 00:07:35.157 [2024-11-26 19:48:25.997803] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:07:35.157 [2024-11-26 19:48:25.997918] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:07:35.157 [2024-11-26 19:48:25.997927] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:07:35.157 [2024-11-26 19:48:25.998031] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:35.157 19:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.157 19:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:07:35.157 
19:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:07:35.157 19:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:35.157 19:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:35.157 19:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:35.157 19:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:35.157 19:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:35.157 19:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.157 19:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:35.157 19:48:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.157 19:48:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:07:35.157 19:48:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.157 19:48:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:35.157 [ 00:07:35.157 { 00:07:35.157 "name": "NewBaseBdev", 00:07:35.157 "aliases": [ 00:07:35.157 "1efaddc3-ba2e-4ca5-bcfd-63a3e26e0ccf" 00:07:35.157 ], 00:07:35.157 "product_name": "Malloc disk", 00:07:35.157 "block_size": 512, 00:07:35.157 "num_blocks": 65536, 00:07:35.157 "uuid": "1efaddc3-ba2e-4ca5-bcfd-63a3e26e0ccf", 00:07:35.157 "assigned_rate_limits": { 00:07:35.157 "rw_ios_per_sec": 0, 00:07:35.157 "rw_mbytes_per_sec": 0, 00:07:35.157 "r_mbytes_per_sec": 0, 00:07:35.157 "w_mbytes_per_sec": 0 00:07:35.157 }, 00:07:35.157 "claimed": true, 00:07:35.157 "claim_type": "exclusive_write", 00:07:35.157 
"zoned": false, 00:07:35.157 "supported_io_types": { 00:07:35.157 "read": true, 00:07:35.157 "write": true, 00:07:35.157 "unmap": true, 00:07:35.157 "flush": true, 00:07:35.157 "reset": true, 00:07:35.157 "nvme_admin": false, 00:07:35.157 "nvme_io": false, 00:07:35.157 "nvme_io_md": false, 00:07:35.157 "write_zeroes": true, 00:07:35.157 "zcopy": true, 00:07:35.157 "get_zone_info": false, 00:07:35.157 "zone_management": false, 00:07:35.157 "zone_append": false, 00:07:35.157 "compare": false, 00:07:35.157 "compare_and_write": false, 00:07:35.157 "abort": true, 00:07:35.157 "seek_hole": false, 00:07:35.157 "seek_data": false, 00:07:35.157 "copy": true, 00:07:35.157 "nvme_iov_md": false 00:07:35.157 }, 00:07:35.157 "memory_domains": [ 00:07:35.157 { 00:07:35.157 "dma_device_id": "system", 00:07:35.157 "dma_device_type": 1 00:07:35.157 }, 00:07:35.157 { 00:07:35.157 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:35.157 "dma_device_type": 2 00:07:35.157 } 00:07:35.157 ], 00:07:35.157 "driver_specific": {} 00:07:35.157 } 00:07:35.157 ] 00:07:35.157 19:48:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.157 19:48:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:35.157 19:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:07:35.157 19:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:35.157 19:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:35.157 19:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:35.157 19:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:35.157 19:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 
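The repeated `verify_raid_bdev_state Existed_Raid ...` calls above all follow one pattern: fetch `bdev_raid_get_bdevs all`, select the entry by name with `jq -r '.[] | select(.name == "Existed_Raid")'`, then compare state, level, strip size, and the discovered/operational counts. That selection and comparison can be sketched in Python (JSON abridged from the log; field names match the RPC output shown):

```python
import json

# Abridged `bdev_raid_get_bdevs all` output, as dumped in the log above
# after NewBaseBdev brings the array online.
raid_bdevs = json.loads("""
[{
  "name": "Existed_Raid",
  "strip_size_kb": 0,
  "state": "online",
  "raid_level": "raid1",
  "superblock": true,
  "num_base_bdevs": 3,
  "num_base_bdevs_discovered": 3,
  "num_base_bdevs_operational": 3
}]
""")

def verify_raid_bdev_state(bdevs, name, state, level, strip_size, operational):
    """Python equivalent of the shell helper: select by name
    (jq: .[] | select(.name == ...)) and compare each field."""
    info = next(b for b in bdevs if b["name"] == name)
    return (info["state"] == state
            and info["raid_level"] == level
            and info["strip_size_kb"] == strip_size
            and info["num_base_bdevs_operational"] == operational)

print(verify_raid_bdev_state(raid_bdevs, "Existed_Raid", "online", "raid1", 0, 3))
```

Note that `strip_size` is 0 throughout because raid1 has no striping; the same helper is reused for the `configuring` intermediate states earlier in the log.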
00:07:35.157 19:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:35.157 19:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:35.157 19:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:35.157 19:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:35.157 19:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:35.157 19:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:35.157 19:48:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.157 19:48:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:35.157 19:48:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.157 19:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:35.157 "name": "Existed_Raid", 00:07:35.157 "uuid": "7642ea3b-6d69-4224-b1c9-4d52d57391b5", 00:07:35.157 "strip_size_kb": 0, 00:07:35.157 "state": "online", 00:07:35.157 "raid_level": "raid1", 00:07:35.157 "superblock": true, 00:07:35.157 "num_base_bdevs": 3, 00:07:35.157 "num_base_bdevs_discovered": 3, 00:07:35.157 "num_base_bdevs_operational": 3, 00:07:35.157 "base_bdevs_list": [ 00:07:35.157 { 00:07:35.157 "name": "NewBaseBdev", 00:07:35.157 "uuid": "1efaddc3-ba2e-4ca5-bcfd-63a3e26e0ccf", 00:07:35.157 "is_configured": true, 00:07:35.157 "data_offset": 2048, 00:07:35.157 "data_size": 63488 00:07:35.157 }, 00:07:35.157 { 00:07:35.157 "name": "BaseBdev2", 00:07:35.157 "uuid": "f1d8e18b-cb39-4b9c-8e42-429ea3307c02", 00:07:35.157 "is_configured": true, 00:07:35.157 "data_offset": 2048, 00:07:35.157 "data_size": 63488 00:07:35.157 }, 00:07:35.157 
{ 00:07:35.157 "name": "BaseBdev3", 00:07:35.157 "uuid": "ed16b7b7-3f49-4da9-89b8-9c65a9313eb7", 00:07:35.157 "is_configured": true, 00:07:35.157 "data_offset": 2048, 00:07:35.157 "data_size": 63488 00:07:35.157 } 00:07:35.157 ] 00:07:35.157 }' 00:07:35.157 19:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:35.157 19:48:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:35.416 19:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:07:35.416 19:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:35.416 19:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:35.416 19:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:35.416 19:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:07:35.416 19:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:35.416 19:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:35.416 19:48:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.416 19:48:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:35.416 19:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:35.416 [2024-11-26 19:48:26.329781] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:35.416 19:48:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.674 19:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:35.674 "name": "Existed_Raid", 00:07:35.674 
"aliases": [ 00:07:35.674 "7642ea3b-6d69-4224-b1c9-4d52d57391b5" 00:07:35.674 ], 00:07:35.674 "product_name": "Raid Volume", 00:07:35.674 "block_size": 512, 00:07:35.674 "num_blocks": 63488, 00:07:35.674 "uuid": "7642ea3b-6d69-4224-b1c9-4d52d57391b5", 00:07:35.674 "assigned_rate_limits": { 00:07:35.674 "rw_ios_per_sec": 0, 00:07:35.674 "rw_mbytes_per_sec": 0, 00:07:35.674 "r_mbytes_per_sec": 0, 00:07:35.674 "w_mbytes_per_sec": 0 00:07:35.674 }, 00:07:35.674 "claimed": false, 00:07:35.674 "zoned": false, 00:07:35.674 "supported_io_types": { 00:07:35.674 "read": true, 00:07:35.674 "write": true, 00:07:35.674 "unmap": false, 00:07:35.674 "flush": false, 00:07:35.674 "reset": true, 00:07:35.674 "nvme_admin": false, 00:07:35.674 "nvme_io": false, 00:07:35.674 "nvme_io_md": false, 00:07:35.674 "write_zeroes": true, 00:07:35.674 "zcopy": false, 00:07:35.674 "get_zone_info": false, 00:07:35.674 "zone_management": false, 00:07:35.674 "zone_append": false, 00:07:35.674 "compare": false, 00:07:35.674 "compare_and_write": false, 00:07:35.674 "abort": false, 00:07:35.674 "seek_hole": false, 00:07:35.674 "seek_data": false, 00:07:35.674 "copy": false, 00:07:35.674 "nvme_iov_md": false 00:07:35.674 }, 00:07:35.674 "memory_domains": [ 00:07:35.674 { 00:07:35.674 "dma_device_id": "system", 00:07:35.674 "dma_device_type": 1 00:07:35.674 }, 00:07:35.674 { 00:07:35.674 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:35.674 "dma_device_type": 2 00:07:35.674 }, 00:07:35.674 { 00:07:35.674 "dma_device_id": "system", 00:07:35.674 "dma_device_type": 1 00:07:35.674 }, 00:07:35.674 { 00:07:35.674 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:35.674 "dma_device_type": 2 00:07:35.674 }, 00:07:35.674 { 00:07:35.674 "dma_device_id": "system", 00:07:35.674 "dma_device_type": 1 00:07:35.674 }, 00:07:35.674 { 00:07:35.674 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:35.674 "dma_device_type": 2 00:07:35.674 } 00:07:35.674 ], 00:07:35.674 "driver_specific": { 00:07:35.674 "raid": { 00:07:35.674 
"uuid": "7642ea3b-6d69-4224-b1c9-4d52d57391b5", 00:07:35.674 "strip_size_kb": 0, 00:07:35.674 "state": "online", 00:07:35.674 "raid_level": "raid1", 00:07:35.674 "superblock": true, 00:07:35.674 "num_base_bdevs": 3, 00:07:35.674 "num_base_bdevs_discovered": 3, 00:07:35.674 "num_base_bdevs_operational": 3, 00:07:35.674 "base_bdevs_list": [ 00:07:35.674 { 00:07:35.674 "name": "NewBaseBdev", 00:07:35.674 "uuid": "1efaddc3-ba2e-4ca5-bcfd-63a3e26e0ccf", 00:07:35.674 "is_configured": true, 00:07:35.674 "data_offset": 2048, 00:07:35.674 "data_size": 63488 00:07:35.674 }, 00:07:35.674 { 00:07:35.674 "name": "BaseBdev2", 00:07:35.674 "uuid": "f1d8e18b-cb39-4b9c-8e42-429ea3307c02", 00:07:35.674 "is_configured": true, 00:07:35.674 "data_offset": 2048, 00:07:35.674 "data_size": 63488 00:07:35.674 }, 00:07:35.674 { 00:07:35.674 "name": "BaseBdev3", 00:07:35.674 "uuid": "ed16b7b7-3f49-4da9-89b8-9c65a9313eb7", 00:07:35.674 "is_configured": true, 00:07:35.674 "data_offset": 2048, 00:07:35.674 "data_size": 63488 00:07:35.674 } 00:07:35.674 ] 00:07:35.674 } 00:07:35.674 } 00:07:35.674 }' 00:07:35.674 19:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:35.674 19:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:07:35.674 BaseBdev2 00:07:35.674 BaseBdev3' 00:07:35.674 19:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:35.674 19:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:35.674 19:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:35.674 19:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:07:35.674 19:48:26 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.674 19:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:35.674 19:48:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:35.674 19:48:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.674 19:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:35.674 19:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:35.674 19:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:35.674 19:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:35.674 19:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:35.674 19:48:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.674 19:48:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:35.674 19:48:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.674 19:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:35.674 19:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:35.674 19:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:35.674 19:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:07:35.674 19:48:26 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.674 19:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:35.674 19:48:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:35.674 19:48:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.674 19:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:35.674 19:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:35.674 19:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:35.674 19:48:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.674 19:48:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:35.674 [2024-11-26 19:48:26.529535] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:35.674 [2024-11-26 19:48:26.529662] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:35.674 [2024-11-26 19:48:26.529751] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:35.674 [2024-11-26 19:48:26.530010] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:35.674 [2024-11-26 19:48:26.530020] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:07:35.674 19:48:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.674 19:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 66396 00:07:35.674 19:48:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # 
'[' -z 66396 ']' 00:07:35.674 19:48:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 66396 00:07:35.674 19:48:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:07:35.674 19:48:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:35.674 19:48:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66396 00:07:35.674 killing process with pid 66396 00:07:35.674 19:48:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:35.674 19:48:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:35.674 19:48:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66396' 00:07:35.674 19:48:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 66396 00:07:35.674 [2024-11-26 19:48:26.562212] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:35.674 19:48:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 66396 00:07:35.932 [2024-11-26 19:48:26.722204] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:36.496 19:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:07:36.496 00:07:36.496 real 0m7.635s 00:07:36.496 user 0m12.197s 00:07:36.496 sys 0m1.282s 00:07:36.496 19:48:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:36.496 19:48:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:36.496 ************************************ 00:07:36.496 END TEST raid_state_function_test_sb 00:07:36.496 ************************************ 00:07:36.496 19:48:27 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test 
raid_superblock_test raid1 3 00:07:36.496 19:48:27 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:07:36.496 19:48:27 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:36.496 19:48:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:36.496 ************************************ 00:07:36.496 START TEST raid_superblock_test 00:07:36.496 ************************************ 00:07:36.496 19:48:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 3 00:07:36.496 19:48:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:07:36.496 19:48:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:07:36.496 19:48:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:07:36.496 19:48:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:07:36.496 19:48:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:07:36.496 19:48:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:07:36.496 19:48:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:07:36.496 19:48:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:07:36.497 19:48:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:07:36.497 19:48:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:07:36.497 19:48:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:07:36.497 19:48:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:07:36.497 19:48:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:07:36.497 19:48:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' 
raid1 '!=' raid1 ']' 00:07:36.497 19:48:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:07:36.497 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:36.497 19:48:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=66984 00:07:36.497 19:48:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 66984 00:07:36.497 19:48:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 66984 ']' 00:07:36.497 19:48:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:36.497 19:48:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:36.497 19:48:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:36.497 19:48:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:36.497 19:48:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.497 19:48:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:07:36.811 [2024-11-26 19:48:27.472535] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 
00:07:36.811 [2024-11-26 19:48:27.472887] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66984 ] 00:07:36.811 [2024-11-26 19:48:27.633615] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.069 [2024-11-26 19:48:27.753114] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.069 [2024-11-26 19:48:27.900141] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:37.069 [2024-11-26 19:48:27.900397] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:37.635 19:48:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:37.635 19:48:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:07:37.635 19:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:07:37.635 19:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:37.635 19:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:07:37.635 19:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:07:37.635 19:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:07:37.635 19:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:37.635 19:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:37.635 19:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:37.635 19:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:07:37.635 
19:48:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.635 19:48:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.635 malloc1 00:07:37.635 19:48:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.635 19:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:37.635 19:48:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.635 19:48:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.635 [2024-11-26 19:48:28.331558] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:37.635 [2024-11-26 19:48:28.331813] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:37.635 [2024-11-26 19:48:28.331848] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:37.635 [2024-11-26 19:48:28.331864] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:37.635 [2024-11-26 19:48:28.334281] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:37.635 [2024-11-26 19:48:28.334323] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:37.635 pt1 00:07:37.635 19:48:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.635 19:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:37.635 19:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:37.635 19:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:07:37.635 19:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:07:37.635 19:48:28 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:07:37.635 19:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:37.635 19:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:37.635 19:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:37.635 19:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:07:37.635 19:48:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.635 19:48:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.635 malloc2 00:07:37.635 19:48:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.635 19:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:37.635 19:48:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.635 19:48:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.635 [2024-11-26 19:48:28.369921] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:37.635 [2024-11-26 19:48:28.369987] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:37.635 [2024-11-26 19:48:28.370014] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:07:37.635 [2024-11-26 19:48:28.370023] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:37.635 [2024-11-26 19:48:28.372410] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:37.635 [2024-11-26 19:48:28.372468] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:37.635 
pt2 00:07:37.635 19:48:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.635 19:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:37.635 19:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:37.635 19:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:07:37.635 19:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:07:37.635 19:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:07:37.635 19:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:37.635 19:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:37.636 19:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:37.636 19:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:07:37.636 19:48:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.636 19:48:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.636 malloc3 00:07:37.636 19:48:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.636 19:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:07:37.636 19:48:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.636 19:48:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.636 [2024-11-26 19:48:28.420603] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:07:37.636 [2024-11-26 19:48:28.420682] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:37.636 [2024-11-26 19:48:28.420708] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:07:37.636 [2024-11-26 19:48:28.420718] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:37.636 [2024-11-26 19:48:28.423134] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:37.636 [2024-11-26 19:48:28.423172] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:07:37.636 pt3 00:07:37.636 19:48:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.636 19:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:37.636 19:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:37.636 19:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:07:37.636 19:48:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.636 19:48:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.636 [2024-11-26 19:48:28.428646] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:37.636 [2024-11-26 19:48:28.430733] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:37.636 [2024-11-26 19:48:28.430803] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:07:37.636 [2024-11-26 19:48:28.430994] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:37.636 [2024-11-26 19:48:28.431012] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:37.636 [2024-11-26 19:48:28.431301] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:07:37.636 
[2024-11-26 19:48:28.431490] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:37.636 [2024-11-26 19:48:28.431501] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:07:37.636 [2024-11-26 19:48:28.431666] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:37.636 19:48:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.636 19:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:07:37.636 19:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:37.636 19:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:37.636 19:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:37.636 19:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:37.636 19:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:37.636 19:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:37.636 19:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:37.636 19:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:37.636 19:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:37.636 19:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:37.636 19:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:37.636 19:48:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.636 19:48:28 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:07:37.636 19:48:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.636 19:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:37.636 "name": "raid_bdev1", 00:07:37.636 "uuid": "44c8f14b-db05-4b27-9728-1e7550b79958", 00:07:37.636 "strip_size_kb": 0, 00:07:37.636 "state": "online", 00:07:37.636 "raid_level": "raid1", 00:07:37.636 "superblock": true, 00:07:37.636 "num_base_bdevs": 3, 00:07:37.636 "num_base_bdevs_discovered": 3, 00:07:37.636 "num_base_bdevs_operational": 3, 00:07:37.636 "base_bdevs_list": [ 00:07:37.636 { 00:07:37.636 "name": "pt1", 00:07:37.636 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:37.636 "is_configured": true, 00:07:37.636 "data_offset": 2048, 00:07:37.636 "data_size": 63488 00:07:37.636 }, 00:07:37.636 { 00:07:37.636 "name": "pt2", 00:07:37.636 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:37.636 "is_configured": true, 00:07:37.636 "data_offset": 2048, 00:07:37.636 "data_size": 63488 00:07:37.636 }, 00:07:37.636 { 00:07:37.636 "name": "pt3", 00:07:37.636 "uuid": "00000000-0000-0000-0000-000000000003", 00:07:37.636 "is_configured": true, 00:07:37.636 "data_offset": 2048, 00:07:37.636 "data_size": 63488 00:07:37.636 } 00:07:37.636 ] 00:07:37.636 }' 00:07:37.636 19:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:37.636 19:48:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.894 19:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:07:37.894 19:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:37.894 19:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:37.894 19:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:37.894 19:48:28 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:37.894 19:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:37.894 19:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:37.894 19:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:37.894 19:48:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.894 19:48:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.894 [2024-11-26 19:48:28.745037] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:37.894 19:48:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.894 19:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:37.894 "name": "raid_bdev1", 00:07:37.894 "aliases": [ 00:07:37.894 "44c8f14b-db05-4b27-9728-1e7550b79958" 00:07:37.894 ], 00:07:37.894 "product_name": "Raid Volume", 00:07:37.894 "block_size": 512, 00:07:37.894 "num_blocks": 63488, 00:07:37.894 "uuid": "44c8f14b-db05-4b27-9728-1e7550b79958", 00:07:37.894 "assigned_rate_limits": { 00:07:37.894 "rw_ios_per_sec": 0, 00:07:37.894 "rw_mbytes_per_sec": 0, 00:07:37.894 "r_mbytes_per_sec": 0, 00:07:37.894 "w_mbytes_per_sec": 0 00:07:37.894 }, 00:07:37.894 "claimed": false, 00:07:37.894 "zoned": false, 00:07:37.894 "supported_io_types": { 00:07:37.894 "read": true, 00:07:37.894 "write": true, 00:07:37.894 "unmap": false, 00:07:37.894 "flush": false, 00:07:37.894 "reset": true, 00:07:37.894 "nvme_admin": false, 00:07:37.894 "nvme_io": false, 00:07:37.894 "nvme_io_md": false, 00:07:37.894 "write_zeroes": true, 00:07:37.894 "zcopy": false, 00:07:37.894 "get_zone_info": false, 00:07:37.894 "zone_management": false, 00:07:37.894 "zone_append": false, 00:07:37.894 "compare": false, 00:07:37.894 
"compare_and_write": false, 00:07:37.894 "abort": false, 00:07:37.894 "seek_hole": false, 00:07:37.894 "seek_data": false, 00:07:37.894 "copy": false, 00:07:37.894 "nvme_iov_md": false 00:07:37.894 }, 00:07:37.894 "memory_domains": [ 00:07:37.894 { 00:07:37.894 "dma_device_id": "system", 00:07:37.894 "dma_device_type": 1 00:07:37.894 }, 00:07:37.894 { 00:07:37.894 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:37.894 "dma_device_type": 2 00:07:37.894 }, 00:07:37.894 { 00:07:37.894 "dma_device_id": "system", 00:07:37.894 "dma_device_type": 1 00:07:37.894 }, 00:07:37.894 { 00:07:37.894 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:37.894 "dma_device_type": 2 00:07:37.894 }, 00:07:37.894 { 00:07:37.894 "dma_device_id": "system", 00:07:37.894 "dma_device_type": 1 00:07:37.894 }, 00:07:37.894 { 00:07:37.894 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:37.894 "dma_device_type": 2 00:07:37.894 } 00:07:37.894 ], 00:07:37.894 "driver_specific": { 00:07:37.894 "raid": { 00:07:37.894 "uuid": "44c8f14b-db05-4b27-9728-1e7550b79958", 00:07:37.894 "strip_size_kb": 0, 00:07:37.894 "state": "online", 00:07:37.894 "raid_level": "raid1", 00:07:37.894 "superblock": true, 00:07:37.894 "num_base_bdevs": 3, 00:07:37.894 "num_base_bdevs_discovered": 3, 00:07:37.894 "num_base_bdevs_operational": 3, 00:07:37.894 "base_bdevs_list": [ 00:07:37.894 { 00:07:37.894 "name": "pt1", 00:07:37.894 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:37.894 "is_configured": true, 00:07:37.894 "data_offset": 2048, 00:07:37.894 "data_size": 63488 00:07:37.894 }, 00:07:37.894 { 00:07:37.894 "name": "pt2", 00:07:37.894 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:37.894 "is_configured": true, 00:07:37.894 "data_offset": 2048, 00:07:37.894 "data_size": 63488 00:07:37.894 }, 00:07:37.894 { 00:07:37.894 "name": "pt3", 00:07:37.894 "uuid": "00000000-0000-0000-0000-000000000003", 00:07:37.894 "is_configured": true, 00:07:37.894 "data_offset": 2048, 00:07:37.894 "data_size": 63488 00:07:37.894 } 
00:07:37.894 ] 00:07:37.894 } 00:07:37.894 } 00:07:37.894 }' 00:07:37.894 19:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:37.894 19:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:37.894 pt2 00:07:37.894 pt3' 00:07:37.894 19:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:38.153 19:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:38.153 19:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:38.153 19:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:38.153 19:48:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.153 19:48:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.153 19:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:38.153 19:48:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:38.153 19:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:38.153 19:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:38.153 19:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:38.153 19:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:38.153 19:48:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.153 19:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, 
.dif_type] | join(" ")' 00:07:38.153 19:48:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.153 19:48:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:38.153 19:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:38.153 19:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:38.153 19:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:38.153 19:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:07:38.153 19:48:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.153 19:48:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.153 19:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:38.153 19:48:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:38.153 19:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:38.153 19:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:38.153 19:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:38.153 19:48:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.153 19:48:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.153 19:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:07:38.153 [2024-11-26 19:48:28.941047] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:38.153 19:48:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:07:38.153 19:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=44c8f14b-db05-4b27-9728-1e7550b79958 00:07:38.153 19:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 44c8f14b-db05-4b27-9728-1e7550b79958 ']' 00:07:38.153 19:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:38.153 19:48:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.153 19:48:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.153 [2024-11-26 19:48:28.972738] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:38.153 [2024-11-26 19:48:28.972774] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:38.153 [2024-11-26 19:48:28.972866] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:38.153 [2024-11-26 19:48:28.972952] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:38.153 [2024-11-26 19:48:28.972963] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:07:38.153 19:48:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:38.153 19:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:38.153 19:48:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.153 19:48:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.153 19:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:07:38.153 19:48:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:38.153 19:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:07:38.153 19:48:29 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:07:38.153 19:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:38.153 19:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:07:38.153 19:48:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.154 19:48:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.154 19:48:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:38.154 19:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:38.154 19:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:07:38.154 19:48:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.154 19:48:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.154 19:48:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:38.154 19:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:38.154 19:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:07:38.154 19:48:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.154 19:48:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.154 19:48:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:38.154 19:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:07:38.154 19:48:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.154 19:48:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:07:38.154 19:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:07:38.154 19:48:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:38.154 19:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:07:38.154 19:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:07:38.154 19:48:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:07:38.154 19:48:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:07:38.154 19:48:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:07:38.154 19:48:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:38.154 19:48:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:07:38.154 19:48:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:38.154 19:48:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:07:38.154 19:48:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.154 19:48:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.154 [2024-11-26 19:48:29.076813] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:07:38.154 [2024-11-26 19:48:29.078872] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:07:38.154 [2024-11-26 19:48:29.078941] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 
malloc3 is claimed 00:07:38.154 [2024-11-26 19:48:29.079000] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:07:38.154 [2024-11-26 19:48:29.079061] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:07:38.154 [2024-11-26 19:48:29.079081] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:07:38.154 [2024-11-26 19:48:29.079097] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:38.154 [2024-11-26 19:48:29.079107] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:07:38.154 request: 00:07:38.154 { 00:07:38.154 "name": "raid_bdev1", 00:07:38.154 "raid_level": "raid1", 00:07:38.154 "base_bdevs": [ 00:07:38.154 "malloc1", 00:07:38.154 "malloc2", 00:07:38.154 "malloc3" 00:07:38.154 ], 00:07:38.154 "superblock": false, 00:07:38.154 "method": "bdev_raid_create", 00:07:38.154 "req_id": 1 00:07:38.154 } 00:07:38.154 Got JSON-RPC error response 00:07:38.154 response: 00:07:38.154 { 00:07:38.154 "code": -17, 00:07:38.154 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:07:38.154 } 00:07:38.154 19:48:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:07:38.154 19:48:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:07:38.154 19:48:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:38.154 19:48:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:38.154 19:48:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:38.154 19:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:38.154 19:48:29 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:07:38.154 19:48:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.154 19:48:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.419 19:48:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:38.420 19:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:07:38.420 19:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:07:38.420 19:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:38.420 19:48:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.420 19:48:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.420 [2024-11-26 19:48:29.116767] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:38.420 [2024-11-26 19:48:29.116836] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:38.420 [2024-11-26 19:48:29.116864] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:07:38.420 [2024-11-26 19:48:29.116873] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:38.420 [2024-11-26 19:48:29.119271] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:38.420 [2024-11-26 19:48:29.119425] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:38.420 [2024-11-26 19:48:29.119538] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:07:38.420 [2024-11-26 19:48:29.119599] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:38.420 pt1 00:07:38.420 19:48:29 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:38.420 19:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:07:38.420 19:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:38.420 19:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:38.420 19:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:38.420 19:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:38.420 19:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:38.420 19:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:38.420 19:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:38.420 19:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:38.420 19:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:38.420 19:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:38.420 19:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:38.420 19:48:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.420 19:48:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.420 19:48:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:38.420 19:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:38.420 "name": "raid_bdev1", 00:07:38.420 "uuid": "44c8f14b-db05-4b27-9728-1e7550b79958", 00:07:38.420 "strip_size_kb": 0, 00:07:38.420 "state": "configuring", 00:07:38.420 
"raid_level": "raid1", 00:07:38.420 "superblock": true, 00:07:38.420 "num_base_bdevs": 3, 00:07:38.420 "num_base_bdevs_discovered": 1, 00:07:38.420 "num_base_bdevs_operational": 3, 00:07:38.420 "base_bdevs_list": [ 00:07:38.420 { 00:07:38.420 "name": "pt1", 00:07:38.420 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:38.420 "is_configured": true, 00:07:38.420 "data_offset": 2048, 00:07:38.420 "data_size": 63488 00:07:38.420 }, 00:07:38.420 { 00:07:38.420 "name": null, 00:07:38.420 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:38.420 "is_configured": false, 00:07:38.420 "data_offset": 2048, 00:07:38.420 "data_size": 63488 00:07:38.420 }, 00:07:38.420 { 00:07:38.420 "name": null, 00:07:38.420 "uuid": "00000000-0000-0000-0000-000000000003", 00:07:38.420 "is_configured": false, 00:07:38.420 "data_offset": 2048, 00:07:38.420 "data_size": 63488 00:07:38.420 } 00:07:38.420 ] 00:07:38.420 }' 00:07:38.420 19:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:38.420 19:48:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.679 19:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:07:38.679 19:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:38.679 19:48:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.679 19:48:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.679 [2024-11-26 19:48:29.440851] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:38.679 [2024-11-26 19:48:29.440923] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:38.679 [2024-11-26 19:48:29.440947] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:07:38.679 [2024-11-26 19:48:29.440957] 
vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:38.679 [2024-11-26 19:48:29.441437] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:38.679 [2024-11-26 19:48:29.441463] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:38.679 [2024-11-26 19:48:29.441549] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:38.679 [2024-11-26 19:48:29.441570] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:38.679 pt2 00:07:38.679 19:48:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:38.679 19:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:07:38.679 19:48:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.679 19:48:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.679 [2024-11-26 19:48:29.448867] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:07:38.679 19:48:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:38.679 19:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:07:38.679 19:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:38.679 19:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:38.679 19:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:38.679 19:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:38.679 19:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:38.679 19:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:07:38.679 19:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:38.679 19:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:38.679 19:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:38.679 19:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:38.679 19:48:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.679 19:48:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.679 19:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:38.679 19:48:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:38.679 19:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:38.679 "name": "raid_bdev1", 00:07:38.679 "uuid": "44c8f14b-db05-4b27-9728-1e7550b79958", 00:07:38.679 "strip_size_kb": 0, 00:07:38.679 "state": "configuring", 00:07:38.679 "raid_level": "raid1", 00:07:38.679 "superblock": true, 00:07:38.679 "num_base_bdevs": 3, 00:07:38.679 "num_base_bdevs_discovered": 1, 00:07:38.679 "num_base_bdevs_operational": 3, 00:07:38.679 "base_bdevs_list": [ 00:07:38.679 { 00:07:38.679 "name": "pt1", 00:07:38.679 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:38.679 "is_configured": true, 00:07:38.679 "data_offset": 2048, 00:07:38.679 "data_size": 63488 00:07:38.679 }, 00:07:38.679 { 00:07:38.679 "name": null, 00:07:38.679 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:38.679 "is_configured": false, 00:07:38.679 "data_offset": 0, 00:07:38.679 "data_size": 63488 00:07:38.679 }, 00:07:38.679 { 00:07:38.679 "name": null, 00:07:38.679 "uuid": "00000000-0000-0000-0000-000000000003", 00:07:38.679 "is_configured": false, 00:07:38.679 "data_offset": 2048, 00:07:38.679 
"data_size": 63488 00:07:38.679 } 00:07:38.679 ] 00:07:38.679 }' 00:07:38.679 19:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:38.679 19:48:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.937 19:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:07:38.937 19:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:38.937 19:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:38.937 19:48:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.937 19:48:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.937 [2024-11-26 19:48:29.768903] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:38.938 [2024-11-26 19:48:29.768981] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:38.938 [2024-11-26 19:48:29.769000] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:07:38.938 [2024-11-26 19:48:29.769012] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:38.938 [2024-11-26 19:48:29.769499] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:38.938 [2024-11-26 19:48:29.769515] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:38.938 [2024-11-26 19:48:29.769592] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:38.938 [2024-11-26 19:48:29.769623] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:38.938 pt2 00:07:38.938 19:48:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:38.938 19:48:29 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@478 -- # (( i++ )) 00:07:38.938 19:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:38.938 19:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:07:38.938 19:48:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.938 19:48:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.938 [2024-11-26 19:48:29.776904] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:07:38.938 [2024-11-26 19:48:29.776957] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:38.938 [2024-11-26 19:48:29.776973] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:07:38.938 [2024-11-26 19:48:29.776983] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:38.938 [2024-11-26 19:48:29.777435] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:38.938 [2024-11-26 19:48:29.777459] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:07:38.938 [2024-11-26 19:48:29.777532] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:07:38.938 [2024-11-26 19:48:29.777555] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:07:38.938 [2024-11-26 19:48:29.777685] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:38.938 [2024-11-26 19:48:29.777703] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:38.938 [2024-11-26 19:48:29.777951] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:07:38.938 [2024-11-26 19:48:29.778095] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 
00:07:38.938 [2024-11-26 19:48:29.778103] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:38.938 [2024-11-26 19:48:29.778236] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:38.938 pt3 00:07:38.938 19:48:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:38.938 19:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:07:38.938 19:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:38.938 19:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:07:38.938 19:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:38.938 19:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:38.938 19:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:38.938 19:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:38.938 19:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:38.938 19:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:38.938 19:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:38.938 19:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:38.938 19:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:38.938 19:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:38.938 19:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:38.938 19:48:29 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.938 19:48:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.938 19:48:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:38.938 19:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:38.938 "name": "raid_bdev1", 00:07:38.938 "uuid": "44c8f14b-db05-4b27-9728-1e7550b79958", 00:07:38.938 "strip_size_kb": 0, 00:07:38.938 "state": "online", 00:07:38.938 "raid_level": "raid1", 00:07:38.938 "superblock": true, 00:07:38.938 "num_base_bdevs": 3, 00:07:38.938 "num_base_bdevs_discovered": 3, 00:07:38.938 "num_base_bdevs_operational": 3, 00:07:38.938 "base_bdevs_list": [ 00:07:38.938 { 00:07:38.938 "name": "pt1", 00:07:38.938 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:38.938 "is_configured": true, 00:07:38.938 "data_offset": 2048, 00:07:38.938 "data_size": 63488 00:07:38.938 }, 00:07:38.938 { 00:07:38.938 "name": "pt2", 00:07:38.938 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:38.938 "is_configured": true, 00:07:38.938 "data_offset": 2048, 00:07:38.938 "data_size": 63488 00:07:38.938 }, 00:07:38.938 { 00:07:38.938 "name": "pt3", 00:07:38.938 "uuid": "00000000-0000-0000-0000-000000000003", 00:07:38.938 "is_configured": true, 00:07:38.938 "data_offset": 2048, 00:07:38.938 "data_size": 63488 00:07:38.938 } 00:07:38.938 ] 00:07:38.938 }' 00:07:38.938 19:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:38.938 19:48:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.196 19:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:07:39.196 19:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:39.196 19:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:39.196 19:48:30 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:39.196 19:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:39.196 19:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:39.196 19:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:39.196 19:48:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.196 19:48:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.196 19:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:39.196 [2024-11-26 19:48:30.121364] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:39.454 19:48:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.454 19:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:39.454 "name": "raid_bdev1", 00:07:39.454 "aliases": [ 00:07:39.454 "44c8f14b-db05-4b27-9728-1e7550b79958" 00:07:39.454 ], 00:07:39.454 "product_name": "Raid Volume", 00:07:39.454 "block_size": 512, 00:07:39.454 "num_blocks": 63488, 00:07:39.454 "uuid": "44c8f14b-db05-4b27-9728-1e7550b79958", 00:07:39.454 "assigned_rate_limits": { 00:07:39.454 "rw_ios_per_sec": 0, 00:07:39.454 "rw_mbytes_per_sec": 0, 00:07:39.454 "r_mbytes_per_sec": 0, 00:07:39.454 "w_mbytes_per_sec": 0 00:07:39.454 }, 00:07:39.454 "claimed": false, 00:07:39.454 "zoned": false, 00:07:39.454 "supported_io_types": { 00:07:39.454 "read": true, 00:07:39.454 "write": true, 00:07:39.454 "unmap": false, 00:07:39.454 "flush": false, 00:07:39.454 "reset": true, 00:07:39.454 "nvme_admin": false, 00:07:39.454 "nvme_io": false, 00:07:39.454 "nvme_io_md": false, 00:07:39.455 "write_zeroes": true, 00:07:39.455 "zcopy": false, 00:07:39.455 "get_zone_info": false, 00:07:39.455 
"zone_management": false, 00:07:39.455 "zone_append": false, 00:07:39.455 "compare": false, 00:07:39.455 "compare_and_write": false, 00:07:39.455 "abort": false, 00:07:39.455 "seek_hole": false, 00:07:39.455 "seek_data": false, 00:07:39.455 "copy": false, 00:07:39.455 "nvme_iov_md": false 00:07:39.455 }, 00:07:39.455 "memory_domains": [ 00:07:39.455 { 00:07:39.455 "dma_device_id": "system", 00:07:39.455 "dma_device_type": 1 00:07:39.455 }, 00:07:39.455 { 00:07:39.455 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:39.455 "dma_device_type": 2 00:07:39.455 }, 00:07:39.455 { 00:07:39.455 "dma_device_id": "system", 00:07:39.455 "dma_device_type": 1 00:07:39.455 }, 00:07:39.455 { 00:07:39.455 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:39.455 "dma_device_type": 2 00:07:39.455 }, 00:07:39.455 { 00:07:39.455 "dma_device_id": "system", 00:07:39.455 "dma_device_type": 1 00:07:39.455 }, 00:07:39.455 { 00:07:39.455 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:39.455 "dma_device_type": 2 00:07:39.455 } 00:07:39.455 ], 00:07:39.455 "driver_specific": { 00:07:39.455 "raid": { 00:07:39.455 "uuid": "44c8f14b-db05-4b27-9728-1e7550b79958", 00:07:39.455 "strip_size_kb": 0, 00:07:39.455 "state": "online", 00:07:39.455 "raid_level": "raid1", 00:07:39.455 "superblock": true, 00:07:39.455 "num_base_bdevs": 3, 00:07:39.455 "num_base_bdevs_discovered": 3, 00:07:39.455 "num_base_bdevs_operational": 3, 00:07:39.455 "base_bdevs_list": [ 00:07:39.455 { 00:07:39.455 "name": "pt1", 00:07:39.455 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:39.455 "is_configured": true, 00:07:39.455 "data_offset": 2048, 00:07:39.455 "data_size": 63488 00:07:39.455 }, 00:07:39.455 { 00:07:39.455 "name": "pt2", 00:07:39.455 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:39.455 "is_configured": true, 00:07:39.455 "data_offset": 2048, 00:07:39.455 "data_size": 63488 00:07:39.455 }, 00:07:39.455 { 00:07:39.455 "name": "pt3", 00:07:39.455 "uuid": "00000000-0000-0000-0000-000000000003", 
00:07:39.455 "is_configured": true, 00:07:39.455 "data_offset": 2048, 00:07:39.455 "data_size": 63488 00:07:39.455 } 00:07:39.455 ] 00:07:39.455 } 00:07:39.455 } 00:07:39.455 }' 00:07:39.455 19:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:39.455 19:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:39.455 pt2 00:07:39.455 pt3' 00:07:39.455 19:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:39.455 19:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:39.455 19:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:39.455 19:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:39.455 19:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:39.455 19:48:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.455 19:48:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.455 19:48:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.455 19:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:39.455 19:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:39.455 19:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:39.455 19:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:39.455 19:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:07:39.455 19:48:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.455 19:48:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.455 19:48:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.455 19:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:39.455 19:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:39.455 19:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:39.455 19:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:07:39.455 19:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:39.455 19:48:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.455 19:48:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.455 19:48:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.455 19:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:39.455 19:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:39.455 19:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:39.455 19:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:07:39.455 19:48:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.455 19:48:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.455 [2024-11-26 19:48:30.349373] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:07:39.455 19:48:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.455 19:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 44c8f14b-db05-4b27-9728-1e7550b79958 '!=' 44c8f14b-db05-4b27-9728-1e7550b79958 ']' 00:07:39.455 19:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:07:39.455 19:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:39.455 19:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:07:39.455 19:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:07:39.455 19:48:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.455 19:48:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.455 [2024-11-26 19:48:30.381094] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:07:39.455 19:48:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.455 19:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:07:39.455 19:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:39.455 19:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:39.456 19:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:39.456 19:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:39.456 19:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:39.456 19:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:39.456 19:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:07:39.456 19:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:39.456 19:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:39.714 19:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:39.714 19:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:39.714 19:48:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.714 19:48:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.714 19:48:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.714 19:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:39.714 "name": "raid_bdev1", 00:07:39.714 "uuid": "44c8f14b-db05-4b27-9728-1e7550b79958", 00:07:39.714 "strip_size_kb": 0, 00:07:39.714 "state": "online", 00:07:39.714 "raid_level": "raid1", 00:07:39.714 "superblock": true, 00:07:39.714 "num_base_bdevs": 3, 00:07:39.714 "num_base_bdevs_discovered": 2, 00:07:39.714 "num_base_bdevs_operational": 2, 00:07:39.714 "base_bdevs_list": [ 00:07:39.714 { 00:07:39.714 "name": null, 00:07:39.714 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:39.714 "is_configured": false, 00:07:39.714 "data_offset": 0, 00:07:39.714 "data_size": 63488 00:07:39.714 }, 00:07:39.714 { 00:07:39.714 "name": "pt2", 00:07:39.714 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:39.714 "is_configured": true, 00:07:39.714 "data_offset": 2048, 00:07:39.714 "data_size": 63488 00:07:39.714 }, 00:07:39.714 { 00:07:39.714 "name": "pt3", 00:07:39.714 "uuid": "00000000-0000-0000-0000-000000000003", 00:07:39.714 "is_configured": true, 00:07:39.714 "data_offset": 2048, 00:07:39.714 "data_size": 63488 00:07:39.714 } 00:07:39.714 ] 00:07:39.714 }' 00:07:39.714 19:48:30 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:39.714 19:48:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.021 19:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:40.021 19:48:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.021 19:48:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.021 [2024-11-26 19:48:30.709119] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:40.021 [2024-11-26 19:48:30.709151] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:40.021 [2024-11-26 19:48:30.709235] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:40.021 [2024-11-26 19:48:30.709300] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:40.021 [2024-11-26 19:48:30.709314] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:40.021 19:48:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.021 19:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:40.021 19:48:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.021 19:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:07:40.021 19:48:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.021 19:48:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.021 19:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:07:40.021 19:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:07:40.021 
19:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:07:40.021 19:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:07:40.021 19:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:07:40.021 19:48:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.021 19:48:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.021 19:48:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.021 19:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:07:40.021 19:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:07:40.021 19:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:07:40.021 19:48:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.021 19:48:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.021 19:48:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.021 19:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:07:40.021 19:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:07:40.021 19:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:07:40.021 19:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:07:40.021 19:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:40.021 19:48:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.021 19:48:30 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:07:40.021 [2024-11-26 19:48:30.769094] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:40.021 [2024-11-26 19:48:30.769152] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:40.021 [2024-11-26 19:48:30.769169] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:07:40.021 [2024-11-26 19:48:30.769179] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:40.021 [2024-11-26 19:48:30.771532] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:40.021 [2024-11-26 19:48:30.771568] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:40.021 [2024-11-26 19:48:30.771645] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:40.021 [2024-11-26 19:48:30.771695] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:40.021 pt2 00:07:40.021 19:48:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.021 19:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:07:40.021 19:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:40.021 19:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:40.021 19:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:40.021 19:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:40.021 19:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:40.021 19:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:40.021 19:48:30 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:40.021 19:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:40.021 19:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:40.021 19:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:40.021 19:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:40.021 19:48:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.021 19:48:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.022 19:48:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.022 19:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:40.022 "name": "raid_bdev1", 00:07:40.022 "uuid": "44c8f14b-db05-4b27-9728-1e7550b79958", 00:07:40.022 "strip_size_kb": 0, 00:07:40.022 "state": "configuring", 00:07:40.022 "raid_level": "raid1", 00:07:40.022 "superblock": true, 00:07:40.022 "num_base_bdevs": 3, 00:07:40.022 "num_base_bdevs_discovered": 1, 00:07:40.022 "num_base_bdevs_operational": 2, 00:07:40.022 "base_bdevs_list": [ 00:07:40.022 { 00:07:40.022 "name": null, 00:07:40.022 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:40.022 "is_configured": false, 00:07:40.022 "data_offset": 2048, 00:07:40.022 "data_size": 63488 00:07:40.022 }, 00:07:40.022 { 00:07:40.022 "name": "pt2", 00:07:40.022 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:40.022 "is_configured": true, 00:07:40.022 "data_offset": 2048, 00:07:40.022 "data_size": 63488 00:07:40.022 }, 00:07:40.022 { 00:07:40.022 "name": null, 00:07:40.022 "uuid": "00000000-0000-0000-0000-000000000003", 00:07:40.022 "is_configured": false, 00:07:40.022 "data_offset": 2048, 00:07:40.022 "data_size": 63488 00:07:40.022 } 00:07:40.022 ] 00:07:40.022 }' 
00:07:40.022 19:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:40.022 19:48:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.280 19:48:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:07:40.280 19:48:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:07:40.280 19:48:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=2 00:07:40.280 19:48:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:07:40.280 19:48:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.280 19:48:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.280 [2024-11-26 19:48:31.101206] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:07:40.280 [2024-11-26 19:48:31.101399] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:40.280 [2024-11-26 19:48:31.101427] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:07:40.280 [2024-11-26 19:48:31.101438] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:40.280 [2024-11-26 19:48:31.101920] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:40.280 [2024-11-26 19:48:31.101943] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:07:40.280 [2024-11-26 19:48:31.102034] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:07:40.280 [2024-11-26 19:48:31.102060] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:07:40.280 [2024-11-26 19:48:31.102175] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:07:40.280 [2024-11-26 19:48:31.102193] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:40.280 [2024-11-26 19:48:31.102472] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:07:40.280 [2024-11-26 19:48:31.102627] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:07:40.280 [2024-11-26 19:48:31.102635] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:07:40.280 [2024-11-26 19:48:31.102771] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:40.280 pt3 00:07:40.280 19:48:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.280 19:48:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:07:40.280 19:48:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:40.280 19:48:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:40.280 19:48:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:40.280 19:48:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:40.280 19:48:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:40.280 19:48:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:40.280 19:48:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:40.280 19:48:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:40.280 19:48:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:40.280 19:48:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:40.280 19:48:31 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:40.280 19:48:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.280 19:48:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.281 19:48:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.281 19:48:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:40.281 "name": "raid_bdev1", 00:07:40.281 "uuid": "44c8f14b-db05-4b27-9728-1e7550b79958", 00:07:40.281 "strip_size_kb": 0, 00:07:40.281 "state": "online", 00:07:40.281 "raid_level": "raid1", 00:07:40.281 "superblock": true, 00:07:40.281 "num_base_bdevs": 3, 00:07:40.281 "num_base_bdevs_discovered": 2, 00:07:40.281 "num_base_bdevs_operational": 2, 00:07:40.281 "base_bdevs_list": [ 00:07:40.281 { 00:07:40.281 "name": null, 00:07:40.281 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:40.281 "is_configured": false, 00:07:40.281 "data_offset": 2048, 00:07:40.281 "data_size": 63488 00:07:40.281 }, 00:07:40.281 { 00:07:40.281 "name": "pt2", 00:07:40.281 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:40.281 "is_configured": true, 00:07:40.281 "data_offset": 2048, 00:07:40.281 "data_size": 63488 00:07:40.281 }, 00:07:40.281 { 00:07:40.281 "name": "pt3", 00:07:40.281 "uuid": "00000000-0000-0000-0000-000000000003", 00:07:40.281 "is_configured": true, 00:07:40.281 "data_offset": 2048, 00:07:40.281 "data_size": 63488 00:07:40.281 } 00:07:40.281 ] 00:07:40.281 }' 00:07:40.281 19:48:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:40.281 19:48:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.538 19:48:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:40.538 19:48:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:07:40.538 19:48:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.538 [2024-11-26 19:48:31.413264] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:40.538 [2024-11-26 19:48:31.413415] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:40.538 [2024-11-26 19:48:31.413506] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:40.538 [2024-11-26 19:48:31.413575] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:40.538 [2024-11-26 19:48:31.413585] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:07:40.538 19:48:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.539 19:48:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:40.539 19:48:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.539 19:48:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.539 19:48:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:07:40.539 19:48:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.539 19:48:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:07:40.539 19:48:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:07:40.539 19:48:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:07:40.539 19:48:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:07:40.539 19:48:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:07:40.539 19:48:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:07:40.539 19:48:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.539 19:48:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.539 19:48:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:40.539 19:48:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.539 19:48:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.539 [2024-11-26 19:48:31.465277] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:40.539 [2024-11-26 19:48:31.465419] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:40.539 [2024-11-26 19:48:31.465479] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:07:40.539 [2024-11-26 19:48:31.465519] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:40.539 [2024-11-26 19:48:31.467573] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:40.539 [2024-11-26 19:48:31.467667] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:40.539 [2024-11-26 19:48:31.467783] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:07:40.539 [2024-11-26 19:48:31.467866] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:40.539 [2024-11-26 19:48:31.468021] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:07:40.539 [2024-11-26 19:48:31.468150] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:40.539 [2024-11-26 19:48:31.468208] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state 
configuring 00:07:40.539 [2024-11-26 19:48:31.468281] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:40.539 pt1 00:07:40.539 19:48:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.539 19:48:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:07:40.539 19:48:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:07:40.539 19:48:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:40.539 19:48:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:40.539 19:48:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:40.539 19:48:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:40.539 19:48:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:40.539 19:48:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:40.539 19:48:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:40.539 19:48:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:40.539 19:48:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:40.796 19:48:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:40.796 19:48:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:40.796 19:48:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.796 19:48:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.796 19:48:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:07:40.796 19:48:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:40.796 "name": "raid_bdev1", 00:07:40.796 "uuid": "44c8f14b-db05-4b27-9728-1e7550b79958", 00:07:40.796 "strip_size_kb": 0, 00:07:40.796 "state": "configuring", 00:07:40.796 "raid_level": "raid1", 00:07:40.796 "superblock": true, 00:07:40.796 "num_base_bdevs": 3, 00:07:40.796 "num_base_bdevs_discovered": 1, 00:07:40.796 "num_base_bdevs_operational": 2, 00:07:40.796 "base_bdevs_list": [ 00:07:40.796 { 00:07:40.796 "name": null, 00:07:40.796 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:40.796 "is_configured": false, 00:07:40.796 "data_offset": 2048, 00:07:40.796 "data_size": 63488 00:07:40.796 }, 00:07:40.796 { 00:07:40.796 "name": "pt2", 00:07:40.796 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:40.796 "is_configured": true, 00:07:40.796 "data_offset": 2048, 00:07:40.796 "data_size": 63488 00:07:40.796 }, 00:07:40.796 { 00:07:40.796 "name": null, 00:07:40.796 "uuid": "00000000-0000-0000-0000-000000000003", 00:07:40.796 "is_configured": false, 00:07:40.796 "data_offset": 2048, 00:07:40.796 "data_size": 63488 00:07:40.796 } 00:07:40.796 ] 00:07:40.796 }' 00:07:40.796 19:48:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:40.796 19:48:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.053 19:48:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:07:41.053 19:48:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.053 19:48:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.053 19:48:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:07:41.053 19:48:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.053 19:48:31 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:07:41.053 19:48:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:07:41.053 19:48:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.053 19:48:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.053 [2024-11-26 19:48:31.833380] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:07:41.053 [2024-11-26 19:48:31.833451] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:41.053 [2024-11-26 19:48:31.833471] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:07:41.053 [2024-11-26 19:48:31.833479] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:41.053 [2024-11-26 19:48:31.833909] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:41.053 [2024-11-26 19:48:31.833921] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:07:41.053 [2024-11-26 19:48:31.833996] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:07:41.053 [2024-11-26 19:48:31.834015] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:07:41.053 [2024-11-26 19:48:31.834122] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:07:41.053 [2024-11-26 19:48:31.834130] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:41.053 [2024-11-26 19:48:31.834364] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:07:41.053 [2024-11-26 19:48:31.834490] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:07:41.053 [2024-11-26 19:48:31.834505] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:07:41.053 [2024-11-26 19:48:31.834620] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:41.053 pt3 00:07:41.053 19:48:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.053 19:48:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:07:41.053 19:48:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:41.053 19:48:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:41.053 19:48:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:41.053 19:48:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:41.053 19:48:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:41.053 19:48:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:41.053 19:48:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:41.053 19:48:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:41.053 19:48:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:41.053 19:48:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:41.053 19:48:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:41.053 19:48:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.053 19:48:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.053 19:48:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:07:41.053 19:48:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:41.053 "name": "raid_bdev1", 00:07:41.053 "uuid": "44c8f14b-db05-4b27-9728-1e7550b79958", 00:07:41.053 "strip_size_kb": 0, 00:07:41.053 "state": "online", 00:07:41.053 "raid_level": "raid1", 00:07:41.053 "superblock": true, 00:07:41.053 "num_base_bdevs": 3, 00:07:41.053 "num_base_bdevs_discovered": 2, 00:07:41.053 "num_base_bdevs_operational": 2, 00:07:41.053 "base_bdevs_list": [ 00:07:41.053 { 00:07:41.053 "name": null, 00:07:41.053 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:41.053 "is_configured": false, 00:07:41.053 "data_offset": 2048, 00:07:41.053 "data_size": 63488 00:07:41.053 }, 00:07:41.053 { 00:07:41.053 "name": "pt2", 00:07:41.053 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:41.053 "is_configured": true, 00:07:41.053 "data_offset": 2048, 00:07:41.053 "data_size": 63488 00:07:41.053 }, 00:07:41.053 { 00:07:41.053 "name": "pt3", 00:07:41.053 "uuid": "00000000-0000-0000-0000-000000000003", 00:07:41.053 "is_configured": true, 00:07:41.053 "data_offset": 2048, 00:07:41.053 "data_size": 63488 00:07:41.053 } 00:07:41.053 ] 00:07:41.053 }' 00:07:41.053 19:48:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:41.053 19:48:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.311 19:48:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:07:41.311 19:48:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:07:41.311 19:48:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.311 19:48:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.311 19:48:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.311 19:48:32 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:07:41.311 19:48:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:41.311 19:48:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:07:41.311 19:48:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.311 19:48:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.311 [2024-11-26 19:48:32.245713] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:41.567 19:48:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.567 19:48:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 44c8f14b-db05-4b27-9728-1e7550b79958 '!=' 44c8f14b-db05-4b27-9728-1e7550b79958 ']' 00:07:41.567 19:48:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 66984 00:07:41.567 19:48:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 66984 ']' 00:07:41.567 19:48:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 66984 00:07:41.567 19:48:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:07:41.567 19:48:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:41.567 19:48:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66984 00:07:41.567 killing process with pid 66984 00:07:41.567 19:48:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:41.567 19:48:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:41.567 19:48:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66984' 00:07:41.567 19:48:32 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@973 -- # kill 66984 00:07:41.567 [2024-11-26 19:48:32.299279] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:41.567 19:48:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 66984 00:07:41.567 [2024-11-26 19:48:32.299387] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:41.568 [2024-11-26 19:48:32.299447] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:41.568 [2024-11-26 19:48:32.299458] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:07:41.568 [2024-11-26 19:48:32.459650] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:42.498 19:48:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:07:42.498 00:07:42.498 real 0m5.684s 00:07:42.498 user 0m8.941s 00:07:42.498 sys 0m0.980s 00:07:42.498 19:48:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:42.498 ************************************ 00:07:42.498 END TEST raid_superblock_test 00:07:42.498 ************************************ 00:07:42.498 19:48:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.498 19:48:33 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 3 read 00:07:42.498 19:48:33 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:42.498 19:48:33 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:42.499 19:48:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:42.499 ************************************ 00:07:42.499 START TEST raid_read_error_test 00:07:42.499 ************************************ 00:07:42.499 19:48:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 3 read 00:07:42.499 19:48:33 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:07:42.499 19:48:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:07:42.499 19:48:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:07:42.499 19:48:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:42.499 19:48:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:42.499 19:48:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:42.499 19:48:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:42.499 19:48:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:42.499 19:48:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:42.499 19:48:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:42.499 19:48:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:42.499 19:48:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:07:42.499 19:48:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:42.499 19:48:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:42.499 19:48:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:07:42.499 19:48:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:42.499 19:48:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:42.499 19:48:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:42.499 19:48:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:42.499 19:48:33 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:42.499 19:48:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:42.499 19:48:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:07:42.499 19:48:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:07:42.499 19:48:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:42.499 19:48:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.Kr12Kal5bM 00:07:42.499 19:48:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=67408 00:07:42.499 19:48:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:42.499 19:48:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 67408 00:07:42.499 19:48:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 67408 ']' 00:07:42.499 19:48:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:42.499 19:48:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:42.499 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:42.499 19:48:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:42.499 19:48:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:42.499 19:48:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.499 [2024-11-26 19:48:33.204365] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 
00:07:42.499 [2024-11-26 19:48:33.204514] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67408 ] 00:07:42.499 [2024-11-26 19:48:33.363321] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:42.755 [2024-11-26 19:48:33.468966] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.755 [2024-11-26 19:48:33.592111] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:42.755 [2024-11-26 19:48:33.592181] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:43.321 19:48:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:43.321 19:48:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:07:43.321 19:48:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:43.321 19:48:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:43.321 19:48:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.321 19:48:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.321 BaseBdev1_malloc 00:07:43.321 19:48:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.321 19:48:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:43.321 19:48:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.321 19:48:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.321 true 00:07:43.321 19:48:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:07:43.321 19:48:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:43.321 19:48:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.321 19:48:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.321 [2024-11-26 19:48:34.245545] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:43.321 [2024-11-26 19:48:34.245730] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:43.321 [2024-11-26 19:48:34.245757] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:43.321 [2024-11-26 19:48:34.245768] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:43.321 [2024-11-26 19:48:34.247807] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:43.321 [2024-11-26 19:48:34.247842] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:43.321 BaseBdev1 00:07:43.321 19:48:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.321 19:48:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:43.321 19:48:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:43.321 19:48:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.321 19:48:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.597 BaseBdev2_malloc 00:07:43.597 19:48:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.597 19:48:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:43.597 19:48:34 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.597 19:48:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.597 true 00:07:43.597 19:48:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.597 19:48:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:43.597 19:48:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.597 19:48:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.597 [2024-11-26 19:48:34.287622] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:43.597 [2024-11-26 19:48:34.287679] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:43.597 [2024-11-26 19:48:34.287695] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:43.597 [2024-11-26 19:48:34.287705] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:43.597 [2024-11-26 19:48:34.289713] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:43.597 [2024-11-26 19:48:34.289749] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:43.597 BaseBdev2 00:07:43.597 19:48:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.597 19:48:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:43.597 19:48:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:07:43.597 19:48:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.597 19:48:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.598 BaseBdev3_malloc 00:07:43.598 19:48:34 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.598 19:48:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:07:43.598 19:48:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.598 19:48:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.598 true 00:07:43.598 19:48:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.598 19:48:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:07:43.598 19:48:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.598 19:48:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.598 [2024-11-26 19:48:34.343705] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:07:43.598 [2024-11-26 19:48:34.343898] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:43.598 [2024-11-26 19:48:34.343923] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:07:43.598 [2024-11-26 19:48:34.343933] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:43.598 [2024-11-26 19:48:34.345956] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:43.598 [2024-11-26 19:48:34.345994] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:07:43.598 BaseBdev3 00:07:43.598 19:48:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.598 19:48:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:07:43.598 19:48:34 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.598 19:48:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.598 [2024-11-26 19:48:34.351774] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:43.598 [2024-11-26 19:48:34.353485] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:43.598 [2024-11-26 19:48:34.353553] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:07:43.598 [2024-11-26 19:48:34.353746] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:07:43.598 [2024-11-26 19:48:34.353760] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:43.598 [2024-11-26 19:48:34.354023] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:07:43.598 [2024-11-26 19:48:34.354170] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:07:43.598 [2024-11-26 19:48:34.354179] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:07:43.598 [2024-11-26 19:48:34.354321] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:43.598 19:48:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.598 19:48:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:07:43.598 19:48:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:43.598 19:48:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:43.598 19:48:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:43.598 19:48:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:43.598 19:48:34 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:43.598 19:48:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:43.598 19:48:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:43.598 19:48:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:43.598 19:48:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:43.598 19:48:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:43.598 19:48:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.598 19:48:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.598 19:48:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:43.598 19:48:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.598 19:48:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:43.598 "name": "raid_bdev1", 00:07:43.598 "uuid": "18d5a1a1-7d93-4fc6-a2b0-ad018017b997", 00:07:43.598 "strip_size_kb": 0, 00:07:43.598 "state": "online", 00:07:43.598 "raid_level": "raid1", 00:07:43.598 "superblock": true, 00:07:43.598 "num_base_bdevs": 3, 00:07:43.598 "num_base_bdevs_discovered": 3, 00:07:43.598 "num_base_bdevs_operational": 3, 00:07:43.598 "base_bdevs_list": [ 00:07:43.598 { 00:07:43.598 "name": "BaseBdev1", 00:07:43.598 "uuid": "75ca3a49-96e8-5ba9-8049-3e7fd7dae760", 00:07:43.598 "is_configured": true, 00:07:43.598 "data_offset": 2048, 00:07:43.598 "data_size": 63488 00:07:43.598 }, 00:07:43.598 { 00:07:43.598 "name": "BaseBdev2", 00:07:43.598 "uuid": "d40b948f-cad8-56b9-b7e2-3b2e91c37a22", 00:07:43.598 "is_configured": true, 00:07:43.598 "data_offset": 2048, 00:07:43.598 "data_size": 63488 
00:07:43.598 }, 00:07:43.598 { 00:07:43.598 "name": "BaseBdev3", 00:07:43.598 "uuid": "e142f662-ce7a-506f-9081-fe4cb074b05d", 00:07:43.598 "is_configured": true, 00:07:43.598 "data_offset": 2048, 00:07:43.598 "data_size": 63488 00:07:43.598 } 00:07:43.598 ] 00:07:43.598 }' 00:07:43.598 19:48:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:43.598 19:48:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.855 19:48:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:43.855 19:48:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:44.113 [2024-11-26 19:48:34.820774] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:07:45.047 19:48:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:07:45.047 19:48:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.047 19:48:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.047 19:48:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.047 19:48:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:45.047 19:48:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:07:45.047 19:48:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:07:45.047 19:48:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:07:45.047 19:48:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:07:45.047 19:48:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:45.047 
19:48:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:45.047 19:48:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:45.047 19:48:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:45.047 19:48:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:45.047 19:48:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:45.047 19:48:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:45.047 19:48:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:45.047 19:48:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:45.047 19:48:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:45.047 19:48:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:45.047 19:48:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.047 19:48:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.047 19:48:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.047 19:48:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:45.047 "name": "raid_bdev1", 00:07:45.047 "uuid": "18d5a1a1-7d93-4fc6-a2b0-ad018017b997", 00:07:45.047 "strip_size_kb": 0, 00:07:45.047 "state": "online", 00:07:45.047 "raid_level": "raid1", 00:07:45.047 "superblock": true, 00:07:45.047 "num_base_bdevs": 3, 00:07:45.047 "num_base_bdevs_discovered": 3, 00:07:45.047 "num_base_bdevs_operational": 3, 00:07:45.047 "base_bdevs_list": [ 00:07:45.047 { 00:07:45.047 "name": "BaseBdev1", 00:07:45.047 "uuid": "75ca3a49-96e8-5ba9-8049-3e7fd7dae760", 
00:07:45.047 "is_configured": true, 00:07:45.047 "data_offset": 2048, 00:07:45.047 "data_size": 63488 00:07:45.047 }, 00:07:45.047 { 00:07:45.047 "name": "BaseBdev2", 00:07:45.048 "uuid": "d40b948f-cad8-56b9-b7e2-3b2e91c37a22", 00:07:45.048 "is_configured": true, 00:07:45.048 "data_offset": 2048, 00:07:45.048 "data_size": 63488 00:07:45.048 }, 00:07:45.048 { 00:07:45.048 "name": "BaseBdev3", 00:07:45.048 "uuid": "e142f662-ce7a-506f-9081-fe4cb074b05d", 00:07:45.048 "is_configured": true, 00:07:45.048 "data_offset": 2048, 00:07:45.048 "data_size": 63488 00:07:45.048 } 00:07:45.048 ] 00:07:45.048 }' 00:07:45.048 19:48:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:45.048 19:48:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.305 19:48:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:45.305 19:48:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.305 19:48:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.305 [2024-11-26 19:48:36.084197] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:45.305 [2024-11-26 19:48:36.084236] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:45.305 [2024-11-26 19:48:36.086680] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:45.305 [2024-11-26 19:48:36.086722] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:45.305 [2024-11-26 19:48:36.086820] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:45.305 [2024-11-26 19:48:36.086829] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:07:45.305 19:48:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:07:45.305 { 00:07:45.305 "results": [ 00:07:45.305 { 00:07:45.305 "job": "raid_bdev1", 00:07:45.305 "core_mask": "0x1", 00:07:45.305 "workload": "randrw", 00:07:45.305 "percentage": 50, 00:07:45.305 "status": "finished", 00:07:45.305 "queue_depth": 1, 00:07:45.305 "io_size": 131072, 00:07:45.305 "runtime": 1.261639, 00:07:45.305 "iops": 14647.613144489034, 00:07:45.305 "mibps": 1830.9516430611293, 00:07:45.305 "io_failed": 0, 00:07:45.305 "io_timeout": 0, 00:07:45.305 "avg_latency_us": 65.67638761238761, 00:07:45.305 "min_latency_us": 24.024615384615384, 00:07:45.305 "max_latency_us": 1424.1476923076923 00:07:45.305 } 00:07:45.305 ], 00:07:45.305 "core_count": 1 00:07:45.305 } 00:07:45.305 19:48:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 67408 00:07:45.305 19:48:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 67408 ']' 00:07:45.305 19:48:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 67408 00:07:45.305 19:48:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:07:45.305 19:48:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:45.305 19:48:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67408 00:07:45.306 19:48:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:45.306 19:48:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:45.306 19:48:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67408' 00:07:45.306 killing process with pid 67408 00:07:45.306 19:48:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 67408 00:07:45.306 19:48:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 67408 00:07:45.306 [2024-11-26 19:48:36.116388] 
bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:45.306 [2024-11-26 19:48:36.237592] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:46.331 19:48:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:46.331 19:48:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:46.331 19:48:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.Kr12Kal5bM 00:07:46.331 19:48:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:07:46.331 19:48:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:07:46.331 19:48:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:46.331 19:48:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:07:46.331 ************************************ 00:07:46.331 END TEST raid_read_error_test 00:07:46.331 ************************************ 00:07:46.331 19:48:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:07:46.331 00:07:46.331 real 0m3.761s 00:07:46.331 user 0m4.633s 00:07:46.331 sys 0m0.442s 00:07:46.331 19:48:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:46.331 19:48:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.331 19:48:36 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 3 write 00:07:46.331 19:48:36 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:46.331 19:48:36 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:46.331 19:48:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:46.331 ************************************ 00:07:46.331 START TEST raid_write_error_test 00:07:46.331 ************************************ 00:07:46.331 19:48:36 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@1129 -- # raid_io_error_test raid1 3 write 00:07:46.331 19:48:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:07:46.331 19:48:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:07:46.331 19:48:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:07:46.331 19:48:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:46.331 19:48:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:46.331 19:48:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:46.331 19:48:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:46.331 19:48:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:46.331 19:48:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:46.331 19:48:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:46.331 19:48:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:46.331 19:48:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:07:46.331 19:48:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:46.331 19:48:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:46.331 19:48:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:07:46.331 19:48:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:46.331 19:48:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:46.331 19:48:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:46.331 19:48:36 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:46.331 19:48:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:46.331 19:48:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:46.331 19:48:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:07:46.331 19:48:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:07:46.331 19:48:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:46.331 19:48:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.ImyeTjDY1g 00:07:46.331 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:46.331 19:48:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=67542 00:07:46.331 19:48:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 67542 00:07:46.331 19:48:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 67542 ']' 00:07:46.331 19:48:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:46.331 19:48:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:46.331 19:48:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:46.331 19:48:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:46.331 19:48:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:46.331 19:48:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.331 [2024-11-26 19:48:37.001818] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 00:07:46.331 [2024-11-26 19:48:37.001956] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67542 ] 00:07:46.331 [2024-11-26 19:48:37.157090] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:46.331 [2024-11-26 19:48:37.259369] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.589 [2024-11-26 19:48:37.381062] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:46.589 [2024-11-26 19:48:37.381112] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:47.156 19:48:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:47.156 19:48:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:07:47.156 19:48:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:47.156 19:48:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:47.156 19:48:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.156 19:48:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.156 BaseBdev1_malloc 00:07:47.156 19:48:37 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.156 19:48:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:47.156 19:48:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.156 19:48:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.156 true 00:07:47.156 19:48:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.156 19:48:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:47.156 19:48:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.156 19:48:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.156 [2024-11-26 19:48:37.849495] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:47.156 [2024-11-26 19:48:37.849552] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:47.156 [2024-11-26 19:48:37.849572] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:47.156 [2024-11-26 19:48:37.849582] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:47.156 [2024-11-26 19:48:37.851602] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:47.156 [2024-11-26 19:48:37.851637] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:47.156 BaseBdev1 00:07:47.156 19:48:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.156 19:48:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:47.156 19:48:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev2_malloc 00:07:47.156 19:48:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.156 19:48:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.156 BaseBdev2_malloc 00:07:47.156 19:48:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.156 19:48:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:47.156 19:48:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.156 19:48:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.156 true 00:07:47.156 19:48:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.156 19:48:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:47.156 19:48:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.156 19:48:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.156 [2024-11-26 19:48:37.891508] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:47.156 [2024-11-26 19:48:37.891565] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:47.156 [2024-11-26 19:48:37.891582] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:47.156 [2024-11-26 19:48:37.891592] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:47.156 [2024-11-26 19:48:37.893575] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:47.156 [2024-11-26 19:48:37.893608] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:47.156 BaseBdev2 00:07:47.156 19:48:37 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.156 19:48:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:47.156 19:48:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:07:47.156 19:48:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.156 19:48:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.156 BaseBdev3_malloc 00:07:47.156 19:48:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.156 19:48:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:07:47.156 19:48:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.156 19:48:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.156 true 00:07:47.156 19:48:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.156 19:48:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:07:47.156 19:48:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.156 19:48:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.156 [2024-11-26 19:48:37.945618] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:07:47.156 [2024-11-26 19:48:37.945676] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:47.156 [2024-11-26 19:48:37.945695] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:07:47.156 [2024-11-26 19:48:37.945705] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:47.156 [2024-11-26 19:48:37.947718] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:47.156 [2024-11-26 19:48:37.947753] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:07:47.156 BaseBdev3 00:07:47.156 19:48:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.156 19:48:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:07:47.156 19:48:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.156 19:48:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.157 [2024-11-26 19:48:37.953697] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:47.157 [2024-11-26 19:48:37.955433] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:47.157 [2024-11-26 19:48:37.955502] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:07:47.157 [2024-11-26 19:48:37.955697] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:07:47.157 [2024-11-26 19:48:37.955706] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:47.157 [2024-11-26 19:48:37.955959] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:07:47.157 [2024-11-26 19:48:37.956102] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:07:47.157 [2024-11-26 19:48:37.956112] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:07:47.157 [2024-11-26 19:48:37.956252] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:47.157 19:48:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.157 19:48:37 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:07:47.157 19:48:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:47.157 19:48:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:47.157 19:48:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:47.157 19:48:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:47.157 19:48:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:47.157 19:48:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:47.157 19:48:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:47.157 19:48:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:47.157 19:48:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:47.157 19:48:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:47.157 19:48:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:47.157 19:48:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.157 19:48:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.157 19:48:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.157 19:48:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:47.157 "name": "raid_bdev1", 00:07:47.157 "uuid": "bae366b8-00cd-4c54-99b8-386d792caa0b", 00:07:47.157 "strip_size_kb": 0, 00:07:47.157 "state": "online", 00:07:47.157 "raid_level": "raid1", 00:07:47.157 "superblock": true, 00:07:47.157 
"num_base_bdevs": 3, 00:07:47.157 "num_base_bdevs_discovered": 3, 00:07:47.157 "num_base_bdevs_operational": 3, 00:07:47.157 "base_bdevs_list": [ 00:07:47.157 { 00:07:47.157 "name": "BaseBdev1", 00:07:47.157 "uuid": "5fcce3a4-b540-5aa8-bc31-1b90b6d040fb", 00:07:47.157 "is_configured": true, 00:07:47.157 "data_offset": 2048, 00:07:47.157 "data_size": 63488 00:07:47.157 }, 00:07:47.157 { 00:07:47.157 "name": "BaseBdev2", 00:07:47.157 "uuid": "72d06d2e-cbca-5afd-9e5c-bdd3b86b9cdb", 00:07:47.157 "is_configured": true, 00:07:47.157 "data_offset": 2048, 00:07:47.157 "data_size": 63488 00:07:47.157 }, 00:07:47.157 { 00:07:47.157 "name": "BaseBdev3", 00:07:47.157 "uuid": "bc82baa7-ff1d-5e28-9245-669d5bd4b5cc", 00:07:47.157 "is_configured": true, 00:07:47.157 "data_offset": 2048, 00:07:47.157 "data_size": 63488 00:07:47.157 } 00:07:47.157 ] 00:07:47.157 }' 00:07:47.157 19:48:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:47.157 19:48:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.416 19:48:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:47.416 19:48:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:47.674 [2024-11-26 19:48:38.390634] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:07:48.608 19:48:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:07:48.608 19:48:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.608 19:48:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.608 [2024-11-26 19:48:39.311567] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:07:48.608 [2024-11-26 19:48:39.311629] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:48.608 [2024-11-26 19:48:39.311833] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006700 00:07:48.608 19:48:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.608 19:48:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:48.608 19:48:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:07:48.608 19:48:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:07:48.608 19:48:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:07:48.608 19:48:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:07:48.608 19:48:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:48.608 19:48:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:48.608 19:48:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:48.608 19:48:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:48.608 19:48:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:48.608 19:48:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:48.608 19:48:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:48.608 19:48:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:48.608 19:48:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:48.608 19:48:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:48.608 19:48:39 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:48.608 19:48:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.608 19:48:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.608 19:48:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.608 19:48:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:48.608 "name": "raid_bdev1", 00:07:48.608 "uuid": "bae366b8-00cd-4c54-99b8-386d792caa0b", 00:07:48.608 "strip_size_kb": 0, 00:07:48.608 "state": "online", 00:07:48.608 "raid_level": "raid1", 00:07:48.608 "superblock": true, 00:07:48.608 "num_base_bdevs": 3, 00:07:48.608 "num_base_bdevs_discovered": 2, 00:07:48.608 "num_base_bdevs_operational": 2, 00:07:48.608 "base_bdevs_list": [ 00:07:48.608 { 00:07:48.608 "name": null, 00:07:48.608 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:48.608 "is_configured": false, 00:07:48.608 "data_offset": 0, 00:07:48.608 "data_size": 63488 00:07:48.608 }, 00:07:48.608 { 00:07:48.608 "name": "BaseBdev2", 00:07:48.608 "uuid": "72d06d2e-cbca-5afd-9e5c-bdd3b86b9cdb", 00:07:48.608 "is_configured": true, 00:07:48.608 "data_offset": 2048, 00:07:48.608 "data_size": 63488 00:07:48.608 }, 00:07:48.608 { 00:07:48.608 "name": "BaseBdev3", 00:07:48.608 "uuid": "bc82baa7-ff1d-5e28-9245-669d5bd4b5cc", 00:07:48.608 "is_configured": true, 00:07:48.608 "data_offset": 2048, 00:07:48.608 "data_size": 63488 00:07:48.608 } 00:07:48.608 ] 00:07:48.608 }' 00:07:48.608 19:48:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:48.608 19:48:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.866 19:48:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:48.866 19:48:39 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.866 19:48:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.866 [2024-11-26 19:48:39.629663] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:48.866 [2024-11-26 19:48:39.629852] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:48.866 [2024-11-26 19:48:39.632307] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:48.866 [2024-11-26 19:48:39.632366] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:48.866 [2024-11-26 19:48:39.632447] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:48.866 [2024-11-26 19:48:39.632460] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:07:48.866 { 00:07:48.866 "results": [ 00:07:48.866 { 00:07:48.866 "job": "raid_bdev1", 00:07:48.866 "core_mask": "0x1", 00:07:48.866 "workload": "randrw", 00:07:48.866 "percentage": 50, 00:07:48.866 "status": "finished", 00:07:48.866 "queue_depth": 1, 00:07:48.866 "io_size": 131072, 00:07:48.866 "runtime": 1.23725, 00:07:48.866 "iops": 15792.281268943221, 00:07:48.866 "mibps": 1974.0351586179027, 00:07:48.866 "io_failed": 0, 00:07:48.866 "io_timeout": 0, 00:07:48.866 "avg_latency_us": 60.72415453117434, 00:07:48.866 "min_latency_us": 23.335384615384616, 00:07:48.866 "max_latency_us": 1386.3384615384616 00:07:48.866 } 00:07:48.866 ], 00:07:48.866 "core_count": 1 00:07:48.866 } 00:07:48.866 19:48:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.866 19:48:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 67542 00:07:48.866 19:48:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 67542 ']' 00:07:48.866 19:48:39 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@958 -- # kill -0 67542 00:07:48.866 19:48:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:07:48.866 19:48:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:48.866 19:48:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67542 00:07:48.866 19:48:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:48.866 killing process with pid 67542 00:07:48.866 19:48:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:48.866 19:48:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67542' 00:07:48.866 19:48:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 67542 00:07:48.866 19:48:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 67542 00:07:48.866 [2024-11-26 19:48:39.656473] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:48.866 [2024-11-26 19:48:39.779179] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:49.802 19:48:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.ImyeTjDY1g 00:07:49.802 19:48:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:49.802 19:48:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:49.802 19:48:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:07:49.802 19:48:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:07:49.802 19:48:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:49.802 19:48:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:07:49.802 19:48:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 
-- # [[ 0.00 = \0\.\0\0 ]] 00:07:49.802 00:07:49.802 real 0m3.520s 00:07:49.802 user 0m4.167s 00:07:49.802 sys 0m0.429s 00:07:49.802 19:48:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:49.802 19:48:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.802 ************************************ 00:07:49.802 END TEST raid_write_error_test 00:07:49.802 ************************************ 00:07:49.802 19:48:40 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:07:49.802 19:48:40 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:07:49.802 19:48:40 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:07:49.802 19:48:40 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:49.802 19:48:40 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:49.802 19:48:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:49.802 ************************************ 00:07:49.802 START TEST raid_state_function_test 00:07:49.802 ************************************ 00:07:49.802 19:48:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 4 false 00:07:49.802 19:48:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:07:49.802 19:48:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:07:49.802 19:48:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:07:49.802 19:48:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:49.802 19:48:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:49.802 19:48:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:49.802 19:48:40 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:49.802 19:48:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:49.802 19:48:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:49.802 19:48:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:49.802 19:48:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:49.802 19:48:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:49.802 19:48:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:07:49.802 19:48:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:49.802 19:48:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:49.802 19:48:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:07:49.802 19:48:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:49.802 19:48:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:49.802 19:48:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:07:49.802 19:48:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:49.802 19:48:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:49.802 19:48:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:49.802 19:48:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:49.802 19:48:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:49.802 19:48:40 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:07:49.802 19:48:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:49.802 19:48:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:49.802 19:48:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:07:49.802 19:48:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:07:49.802 19:48:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=67675 00:07:49.802 Process raid pid: 67675 00:07:49.802 19:48:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 67675' 00:07:49.802 19:48:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 67675 00:07:49.802 19:48:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 67675 ']' 00:07:49.802 19:48:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:49.802 19:48:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:49.802 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:49.802 19:48:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:49.802 19:48:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:49.802 19:48:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.802 19:48:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:49.802 [2024-11-26 19:48:40.559419] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 
00:07:49.802 [2024-11-26 19:48:40.560022] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:49.802 [2024-11-26 19:48:40.712916] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.061 [2024-11-26 19:48:40.815407] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.061 [2024-11-26 19:48:40.938529] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:50.061 [2024-11-26 19:48:40.938577] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:50.628 19:48:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:50.628 19:48:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:07:50.628 19:48:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:07:50.628 19:48:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.628 19:48:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.628 [2024-11-26 19:48:41.397069] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:50.628 [2024-11-26 19:48:41.397124] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:50.628 [2024-11-26 19:48:41.397133] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:50.628 [2024-11-26 19:48:41.397141] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:50.628 [2024-11-26 19:48:41.397146] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:07:50.628 [2024-11-26 19:48:41.397154] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:07:50.628 [2024-11-26 19:48:41.397159] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:07:50.628 [2024-11-26 19:48:41.397166] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:07:50.628 19:48:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.628 19:48:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:07:50.628 19:48:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:50.629 19:48:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:50.629 19:48:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:50.629 19:48:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:50.629 19:48:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:07:50.629 19:48:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:50.629 19:48:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:50.629 19:48:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:50.629 19:48:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:50.629 19:48:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:50.629 19:48:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.629 19:48:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:07:50.629 19:48:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:50.629 19:48:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.629 19:48:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:50.629 "name": "Existed_Raid", 00:07:50.629 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:50.629 "strip_size_kb": 64, 00:07:50.629 "state": "configuring", 00:07:50.629 "raid_level": "raid0", 00:07:50.629 "superblock": false, 00:07:50.629 "num_base_bdevs": 4, 00:07:50.629 "num_base_bdevs_discovered": 0, 00:07:50.629 "num_base_bdevs_operational": 4, 00:07:50.629 "base_bdevs_list": [ 00:07:50.629 { 00:07:50.629 "name": "BaseBdev1", 00:07:50.629 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:50.629 "is_configured": false, 00:07:50.629 "data_offset": 0, 00:07:50.629 "data_size": 0 00:07:50.629 }, 00:07:50.629 { 00:07:50.629 "name": "BaseBdev2", 00:07:50.629 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:50.629 "is_configured": false, 00:07:50.629 "data_offset": 0, 00:07:50.629 "data_size": 0 00:07:50.629 }, 00:07:50.629 { 00:07:50.629 "name": "BaseBdev3", 00:07:50.629 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:50.629 "is_configured": false, 00:07:50.629 "data_offset": 0, 00:07:50.629 "data_size": 0 00:07:50.629 }, 00:07:50.629 { 00:07:50.629 "name": "BaseBdev4", 00:07:50.629 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:50.629 "is_configured": false, 00:07:50.629 "data_offset": 0, 00:07:50.629 "data_size": 0 00:07:50.629 } 00:07:50.629 ] 00:07:50.629 }' 00:07:50.629 19:48:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:50.629 19:48:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.888 19:48:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 
00:07:50.888 19:48:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.888 19:48:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.888 [2024-11-26 19:48:41.753081] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:50.888 [2024-11-26 19:48:41.753126] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:07:50.888 19:48:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.888 19:48:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:07:50.888 19:48:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.888 19:48:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.888 [2024-11-26 19:48:41.761080] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:50.889 [2024-11-26 19:48:41.761124] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:50.889 [2024-11-26 19:48:41.761132] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:50.889 [2024-11-26 19:48:41.761140] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:50.889 [2024-11-26 19:48:41.761145] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:07:50.889 [2024-11-26 19:48:41.761153] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:07:50.889 [2024-11-26 19:48:41.761158] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:07:50.889 [2024-11-26 19:48:41.761166] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:07:50.889 19:48:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.889 19:48:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:50.889 19:48:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.889 19:48:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.889 [2024-11-26 19:48:41.791366] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:50.889 BaseBdev1 00:07:50.889 19:48:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.889 19:48:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:50.889 19:48:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:07:50.889 19:48:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:50.889 19:48:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:50.889 19:48:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:50.889 19:48:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:50.889 19:48:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:50.889 19:48:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.889 19:48:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.889 19:48:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.889 19:48:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:50.889 19:48:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.889 19:48:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.889 [ 00:07:50.889 { 00:07:50.889 "name": "BaseBdev1", 00:07:50.889 "aliases": [ 00:07:50.889 "5174023f-ffc9-40ee-8a04-f131088f324b" 00:07:50.889 ], 00:07:50.889 "product_name": "Malloc disk", 00:07:50.889 "block_size": 512, 00:07:50.889 "num_blocks": 65536, 00:07:50.889 "uuid": "5174023f-ffc9-40ee-8a04-f131088f324b", 00:07:50.889 "assigned_rate_limits": { 00:07:50.889 "rw_ios_per_sec": 0, 00:07:50.889 "rw_mbytes_per_sec": 0, 00:07:50.889 "r_mbytes_per_sec": 0, 00:07:50.889 "w_mbytes_per_sec": 0 00:07:50.889 }, 00:07:50.889 "claimed": true, 00:07:50.889 "claim_type": "exclusive_write", 00:07:50.889 "zoned": false, 00:07:50.889 "supported_io_types": { 00:07:50.889 "read": true, 00:07:50.889 "write": true, 00:07:50.889 "unmap": true, 00:07:50.889 "flush": true, 00:07:50.889 "reset": true, 00:07:50.889 "nvme_admin": false, 00:07:50.889 "nvme_io": false, 00:07:50.889 "nvme_io_md": false, 00:07:50.889 "write_zeroes": true, 00:07:50.889 "zcopy": true, 00:07:50.889 "get_zone_info": false, 00:07:50.889 "zone_management": false, 00:07:50.889 "zone_append": false, 00:07:50.889 "compare": false, 00:07:50.889 "compare_and_write": false, 00:07:50.889 "abort": true, 00:07:50.889 "seek_hole": false, 00:07:50.889 "seek_data": false, 00:07:50.889 "copy": true, 00:07:50.889 "nvme_iov_md": false 00:07:50.889 }, 00:07:50.889 "memory_domains": [ 00:07:50.889 { 00:07:50.889 "dma_device_id": "system", 00:07:50.889 "dma_device_type": 1 00:07:50.889 }, 00:07:50.889 { 00:07:50.889 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:50.889 "dma_device_type": 2 00:07:50.889 } 00:07:50.889 ], 00:07:50.889 "driver_specific": {} 00:07:50.889 } 00:07:50.889 ] 00:07:50.889 19:48:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:07:50.889 19:48:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:50.889 19:48:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:07:50.889 19:48:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:50.889 19:48:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:50.889 19:48:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:50.889 19:48:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:50.889 19:48:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:07:50.889 19:48:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:50.889 19:48:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:50.889 19:48:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:50.889 19:48:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:50.889 19:48:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:50.889 19:48:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:50.889 19:48:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.889 19:48:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.208 19:48:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.208 19:48:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:51.208 "name": "Existed_Raid", 
00:07:51.208 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:51.208 "strip_size_kb": 64, 00:07:51.208 "state": "configuring", 00:07:51.208 "raid_level": "raid0", 00:07:51.208 "superblock": false, 00:07:51.208 "num_base_bdevs": 4, 00:07:51.208 "num_base_bdevs_discovered": 1, 00:07:51.208 "num_base_bdevs_operational": 4, 00:07:51.208 "base_bdevs_list": [ 00:07:51.208 { 00:07:51.208 "name": "BaseBdev1", 00:07:51.208 "uuid": "5174023f-ffc9-40ee-8a04-f131088f324b", 00:07:51.208 "is_configured": true, 00:07:51.208 "data_offset": 0, 00:07:51.208 "data_size": 65536 00:07:51.208 }, 00:07:51.208 { 00:07:51.208 "name": "BaseBdev2", 00:07:51.208 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:51.208 "is_configured": false, 00:07:51.208 "data_offset": 0, 00:07:51.208 "data_size": 0 00:07:51.208 }, 00:07:51.208 { 00:07:51.208 "name": "BaseBdev3", 00:07:51.208 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:51.208 "is_configured": false, 00:07:51.208 "data_offset": 0, 00:07:51.208 "data_size": 0 00:07:51.208 }, 00:07:51.208 { 00:07:51.208 "name": "BaseBdev4", 00:07:51.209 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:51.209 "is_configured": false, 00:07:51.209 "data_offset": 0, 00:07:51.209 "data_size": 0 00:07:51.209 } 00:07:51.209 ] 00:07:51.209 }' 00:07:51.209 19:48:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:51.209 19:48:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.466 19:48:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:51.466 19:48:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.466 19:48:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.466 [2024-11-26 19:48:42.143480] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:51.466 [2024-11-26 19:48:42.143540] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:07:51.466 19:48:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.466 19:48:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:07:51.466 19:48:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.466 19:48:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.466 [2024-11-26 19:48:42.151551] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:51.467 [2024-11-26 19:48:42.153294] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:51.467 [2024-11-26 19:48:42.153339] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:51.467 [2024-11-26 19:48:42.153358] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:07:51.467 [2024-11-26 19:48:42.153368] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:07:51.467 [2024-11-26 19:48:42.153374] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:07:51.467 [2024-11-26 19:48:42.153381] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:07:51.467 19:48:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.467 19:48:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:51.467 19:48:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:51.467 19:48:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 
00:07:51.467 19:48:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:51.467 19:48:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:51.467 19:48:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:51.467 19:48:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:51.467 19:48:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:07:51.467 19:48:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:51.467 19:48:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:51.467 19:48:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:51.467 19:48:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:51.467 19:48:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:51.467 19:48:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:51.467 19:48:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.467 19:48:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.467 19:48:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.467 19:48:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:51.467 "name": "Existed_Raid", 00:07:51.467 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:51.467 "strip_size_kb": 64, 00:07:51.467 "state": "configuring", 00:07:51.467 "raid_level": "raid0", 00:07:51.467 "superblock": false, 00:07:51.467 "num_base_bdevs": 4, 00:07:51.467 
"num_base_bdevs_discovered": 1, 00:07:51.467 "num_base_bdevs_operational": 4, 00:07:51.467 "base_bdevs_list": [ 00:07:51.467 { 00:07:51.467 "name": "BaseBdev1", 00:07:51.467 "uuid": "5174023f-ffc9-40ee-8a04-f131088f324b", 00:07:51.467 "is_configured": true, 00:07:51.467 "data_offset": 0, 00:07:51.467 "data_size": 65536 00:07:51.467 }, 00:07:51.467 { 00:07:51.467 "name": "BaseBdev2", 00:07:51.467 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:51.467 "is_configured": false, 00:07:51.467 "data_offset": 0, 00:07:51.467 "data_size": 0 00:07:51.467 }, 00:07:51.467 { 00:07:51.467 "name": "BaseBdev3", 00:07:51.467 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:51.467 "is_configured": false, 00:07:51.467 "data_offset": 0, 00:07:51.467 "data_size": 0 00:07:51.467 }, 00:07:51.467 { 00:07:51.467 "name": "BaseBdev4", 00:07:51.467 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:51.467 "is_configured": false, 00:07:51.467 "data_offset": 0, 00:07:51.467 "data_size": 0 00:07:51.467 } 00:07:51.467 ] 00:07:51.467 }' 00:07:51.467 19:48:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:51.467 19:48:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.726 19:48:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:51.726 19:48:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.726 19:48:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.726 [2024-11-26 19:48:42.511883] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:51.726 BaseBdev2 00:07:51.726 19:48:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.726 19:48:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:51.726 19:48:42 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:07:51.726 19:48:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:51.726 19:48:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:51.726 19:48:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:51.726 19:48:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:51.726 19:48:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:51.726 19:48:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.726 19:48:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.726 19:48:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.726 19:48:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:51.726 19:48:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.726 19:48:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.726 [ 00:07:51.726 { 00:07:51.726 "name": "BaseBdev2", 00:07:51.726 "aliases": [ 00:07:51.726 "da101a11-4acd-4689-97ce-09cb9d7716ea" 00:07:51.726 ], 00:07:51.726 "product_name": "Malloc disk", 00:07:51.726 "block_size": 512, 00:07:51.726 "num_blocks": 65536, 00:07:51.726 "uuid": "da101a11-4acd-4689-97ce-09cb9d7716ea", 00:07:51.726 "assigned_rate_limits": { 00:07:51.726 "rw_ios_per_sec": 0, 00:07:51.726 "rw_mbytes_per_sec": 0, 00:07:51.726 "r_mbytes_per_sec": 0, 00:07:51.726 "w_mbytes_per_sec": 0 00:07:51.726 }, 00:07:51.726 "claimed": true, 00:07:51.726 "claim_type": "exclusive_write", 00:07:51.726 "zoned": false, 00:07:51.726 "supported_io_types": { 
00:07:51.726 "read": true, 00:07:51.726 "write": true, 00:07:51.726 "unmap": true, 00:07:51.726 "flush": true, 00:07:51.726 "reset": true, 00:07:51.726 "nvme_admin": false, 00:07:51.726 "nvme_io": false, 00:07:51.726 "nvme_io_md": false, 00:07:51.726 "write_zeroes": true, 00:07:51.726 "zcopy": true, 00:07:51.726 "get_zone_info": false, 00:07:51.726 "zone_management": false, 00:07:51.726 "zone_append": false, 00:07:51.726 "compare": false, 00:07:51.726 "compare_and_write": false, 00:07:51.726 "abort": true, 00:07:51.726 "seek_hole": false, 00:07:51.726 "seek_data": false, 00:07:51.726 "copy": true, 00:07:51.726 "nvme_iov_md": false 00:07:51.726 }, 00:07:51.726 "memory_domains": [ 00:07:51.726 { 00:07:51.726 "dma_device_id": "system", 00:07:51.726 "dma_device_type": 1 00:07:51.726 }, 00:07:51.726 { 00:07:51.726 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:51.726 "dma_device_type": 2 00:07:51.726 } 00:07:51.726 ], 00:07:51.726 "driver_specific": {} 00:07:51.726 } 00:07:51.726 ] 00:07:51.726 19:48:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.726 19:48:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:51.726 19:48:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:51.726 19:48:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:51.726 19:48:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:07:51.726 19:48:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:51.726 19:48:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:51.726 19:48:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:51.726 19:48:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:07:51.726 19:48:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:07:51.726 19:48:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:51.726 19:48:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:51.726 19:48:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:51.726 19:48:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:51.726 19:48:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:51.726 19:48:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:51.726 19:48:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.727 19:48:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.727 19:48:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.727 19:48:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:51.727 "name": "Existed_Raid", 00:07:51.727 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:51.727 "strip_size_kb": 64, 00:07:51.727 "state": "configuring", 00:07:51.727 "raid_level": "raid0", 00:07:51.727 "superblock": false, 00:07:51.727 "num_base_bdevs": 4, 00:07:51.727 "num_base_bdevs_discovered": 2, 00:07:51.727 "num_base_bdevs_operational": 4, 00:07:51.727 "base_bdevs_list": [ 00:07:51.727 { 00:07:51.727 "name": "BaseBdev1", 00:07:51.727 "uuid": "5174023f-ffc9-40ee-8a04-f131088f324b", 00:07:51.727 "is_configured": true, 00:07:51.727 "data_offset": 0, 00:07:51.727 "data_size": 65536 00:07:51.727 }, 00:07:51.727 { 00:07:51.727 "name": "BaseBdev2", 00:07:51.727 "uuid": "da101a11-4acd-4689-97ce-09cb9d7716ea", 00:07:51.727 
"is_configured": true, 00:07:51.727 "data_offset": 0, 00:07:51.727 "data_size": 65536 00:07:51.727 }, 00:07:51.727 { 00:07:51.727 "name": "BaseBdev3", 00:07:51.727 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:51.727 "is_configured": false, 00:07:51.727 "data_offset": 0, 00:07:51.727 "data_size": 0 00:07:51.727 }, 00:07:51.727 { 00:07:51.727 "name": "BaseBdev4", 00:07:51.727 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:51.727 "is_configured": false, 00:07:51.727 "data_offset": 0, 00:07:51.727 "data_size": 0 00:07:51.727 } 00:07:51.727 ] 00:07:51.727 }' 00:07:51.727 19:48:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:51.727 19:48:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.985 19:48:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:07:51.985 19:48:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.985 19:48:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.985 [2024-11-26 19:48:42.914679] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:07:51.985 BaseBdev3 00:07:51.985 19:48:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.985 19:48:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:07:51.985 19:48:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:07:51.985 19:48:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:51.985 19:48:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:51.985 19:48:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:51.985 19:48:42 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:51.985 19:48:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:51.985 19:48:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.985 19:48:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.244 19:48:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.244 19:48:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:07:52.244 19:48:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.244 19:48:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.244 [ 00:07:52.244 { 00:07:52.244 "name": "BaseBdev3", 00:07:52.244 "aliases": [ 00:07:52.244 "ebe46f2c-cb58-4d29-b724-a132e4757374" 00:07:52.244 ], 00:07:52.244 "product_name": "Malloc disk", 00:07:52.244 "block_size": 512, 00:07:52.244 "num_blocks": 65536, 00:07:52.244 "uuid": "ebe46f2c-cb58-4d29-b724-a132e4757374", 00:07:52.244 "assigned_rate_limits": { 00:07:52.244 "rw_ios_per_sec": 0, 00:07:52.244 "rw_mbytes_per_sec": 0, 00:07:52.244 "r_mbytes_per_sec": 0, 00:07:52.244 "w_mbytes_per_sec": 0 00:07:52.244 }, 00:07:52.244 "claimed": true, 00:07:52.244 "claim_type": "exclusive_write", 00:07:52.244 "zoned": false, 00:07:52.244 "supported_io_types": { 00:07:52.244 "read": true, 00:07:52.244 "write": true, 00:07:52.244 "unmap": true, 00:07:52.244 "flush": true, 00:07:52.244 "reset": true, 00:07:52.244 "nvme_admin": false, 00:07:52.244 "nvme_io": false, 00:07:52.244 "nvme_io_md": false, 00:07:52.244 "write_zeroes": true, 00:07:52.244 "zcopy": true, 00:07:52.244 "get_zone_info": false, 00:07:52.244 "zone_management": false, 00:07:52.244 "zone_append": false, 00:07:52.244 "compare": false, 00:07:52.244 "compare_and_write": false, 
00:07:52.244 "abort": true, 00:07:52.244 "seek_hole": false, 00:07:52.244 "seek_data": false, 00:07:52.244 "copy": true, 00:07:52.245 "nvme_iov_md": false 00:07:52.245 }, 00:07:52.245 "memory_domains": [ 00:07:52.245 { 00:07:52.245 "dma_device_id": "system", 00:07:52.245 "dma_device_type": 1 00:07:52.245 }, 00:07:52.245 { 00:07:52.245 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:52.245 "dma_device_type": 2 00:07:52.245 } 00:07:52.245 ], 00:07:52.245 "driver_specific": {} 00:07:52.245 } 00:07:52.245 ] 00:07:52.245 19:48:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.245 19:48:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:52.245 19:48:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:52.245 19:48:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:52.245 19:48:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:07:52.245 19:48:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:52.245 19:48:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:52.245 19:48:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:52.245 19:48:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:52.245 19:48:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:07:52.245 19:48:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:52.245 19:48:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:52.245 19:48:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:07:52.245 19:48:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:52.245 19:48:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:52.245 19:48:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.245 19:48:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.245 19:48:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:52.245 19:48:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.245 19:48:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:52.245 "name": "Existed_Raid", 00:07:52.245 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:52.245 "strip_size_kb": 64, 00:07:52.245 "state": "configuring", 00:07:52.245 "raid_level": "raid0", 00:07:52.245 "superblock": false, 00:07:52.245 "num_base_bdevs": 4, 00:07:52.245 "num_base_bdevs_discovered": 3, 00:07:52.245 "num_base_bdevs_operational": 4, 00:07:52.245 "base_bdevs_list": [ 00:07:52.245 { 00:07:52.245 "name": "BaseBdev1", 00:07:52.245 "uuid": "5174023f-ffc9-40ee-8a04-f131088f324b", 00:07:52.245 "is_configured": true, 00:07:52.245 "data_offset": 0, 00:07:52.245 "data_size": 65536 00:07:52.245 }, 00:07:52.245 { 00:07:52.245 "name": "BaseBdev2", 00:07:52.245 "uuid": "da101a11-4acd-4689-97ce-09cb9d7716ea", 00:07:52.245 "is_configured": true, 00:07:52.245 "data_offset": 0, 00:07:52.245 "data_size": 65536 00:07:52.245 }, 00:07:52.245 { 00:07:52.245 "name": "BaseBdev3", 00:07:52.245 "uuid": "ebe46f2c-cb58-4d29-b724-a132e4757374", 00:07:52.245 "is_configured": true, 00:07:52.245 "data_offset": 0, 00:07:52.245 "data_size": 65536 00:07:52.245 }, 00:07:52.245 { 00:07:52.245 "name": "BaseBdev4", 00:07:52.245 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:52.245 "is_configured": false, 
00:07:52.245 "data_offset": 0, 00:07:52.245 "data_size": 0 00:07:52.245 } 00:07:52.245 ] 00:07:52.245 }' 00:07:52.245 19:48:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:52.245 19:48:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.503 19:48:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:07:52.503 19:48:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.503 19:48:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.503 [2024-11-26 19:48:43.279201] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:07:52.503 [2024-11-26 19:48:43.279262] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:52.503 [2024-11-26 19:48:43.279271] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:07:52.503 [2024-11-26 19:48:43.279533] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:07:52.503 [2024-11-26 19:48:43.279669] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:52.503 [2024-11-26 19:48:43.279680] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:07:52.503 [2024-11-26 19:48:43.279914] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:52.503 BaseBdev4 00:07:52.503 19:48:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.503 19:48:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:07:52.503 19:48:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:07:52.503 19:48:43 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:52.503 19:48:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:52.503 19:48:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:52.503 19:48:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:52.503 19:48:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:52.503 19:48:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.503 19:48:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.503 19:48:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.503 19:48:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:07:52.503 19:48:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.503 19:48:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.503 [ 00:07:52.503 { 00:07:52.503 "name": "BaseBdev4", 00:07:52.503 "aliases": [ 00:07:52.504 "09e2c5a8-9308-4041-87b0-1e32e8084072" 00:07:52.504 ], 00:07:52.504 "product_name": "Malloc disk", 00:07:52.504 "block_size": 512, 00:07:52.504 "num_blocks": 65536, 00:07:52.504 "uuid": "09e2c5a8-9308-4041-87b0-1e32e8084072", 00:07:52.504 "assigned_rate_limits": { 00:07:52.504 "rw_ios_per_sec": 0, 00:07:52.504 "rw_mbytes_per_sec": 0, 00:07:52.504 "r_mbytes_per_sec": 0, 00:07:52.504 "w_mbytes_per_sec": 0 00:07:52.504 }, 00:07:52.504 "claimed": true, 00:07:52.504 "claim_type": "exclusive_write", 00:07:52.504 "zoned": false, 00:07:52.504 "supported_io_types": { 00:07:52.504 "read": true, 00:07:52.504 "write": true, 00:07:52.504 "unmap": true, 00:07:52.504 "flush": true, 00:07:52.504 "reset": true, 00:07:52.504 
"nvme_admin": false, 00:07:52.504 "nvme_io": false, 00:07:52.504 "nvme_io_md": false, 00:07:52.504 "write_zeroes": true, 00:07:52.504 "zcopy": true, 00:07:52.504 "get_zone_info": false, 00:07:52.504 "zone_management": false, 00:07:52.504 "zone_append": false, 00:07:52.504 "compare": false, 00:07:52.504 "compare_and_write": false, 00:07:52.504 "abort": true, 00:07:52.504 "seek_hole": false, 00:07:52.504 "seek_data": false, 00:07:52.504 "copy": true, 00:07:52.504 "nvme_iov_md": false 00:07:52.504 }, 00:07:52.504 "memory_domains": [ 00:07:52.504 { 00:07:52.504 "dma_device_id": "system", 00:07:52.504 "dma_device_type": 1 00:07:52.504 }, 00:07:52.504 { 00:07:52.504 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:52.504 "dma_device_type": 2 00:07:52.504 } 00:07:52.504 ], 00:07:52.504 "driver_specific": {} 00:07:52.504 } 00:07:52.504 ] 00:07:52.504 19:48:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.504 19:48:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:52.504 19:48:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:52.504 19:48:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:52.504 19:48:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:07:52.504 19:48:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:52.504 19:48:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:52.504 19:48:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:52.504 19:48:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:52.504 19:48:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:07:52.504 19:48:43 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:52.504 19:48:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:52.504 19:48:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:52.504 19:48:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:52.504 19:48:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:52.504 19:48:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:52.504 19:48:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.504 19:48:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.504 19:48:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.504 19:48:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:52.504 "name": "Existed_Raid", 00:07:52.504 "uuid": "957483a8-ea93-4178-baca-31d91bdf069b", 00:07:52.504 "strip_size_kb": 64, 00:07:52.504 "state": "online", 00:07:52.504 "raid_level": "raid0", 00:07:52.504 "superblock": false, 00:07:52.504 "num_base_bdevs": 4, 00:07:52.504 "num_base_bdevs_discovered": 4, 00:07:52.504 "num_base_bdevs_operational": 4, 00:07:52.504 "base_bdevs_list": [ 00:07:52.504 { 00:07:52.504 "name": "BaseBdev1", 00:07:52.504 "uuid": "5174023f-ffc9-40ee-8a04-f131088f324b", 00:07:52.504 "is_configured": true, 00:07:52.504 "data_offset": 0, 00:07:52.504 "data_size": 65536 00:07:52.504 }, 00:07:52.504 { 00:07:52.504 "name": "BaseBdev2", 00:07:52.504 "uuid": "da101a11-4acd-4689-97ce-09cb9d7716ea", 00:07:52.504 "is_configured": true, 00:07:52.504 "data_offset": 0, 00:07:52.504 "data_size": 65536 00:07:52.504 }, 00:07:52.504 { 00:07:52.504 "name": "BaseBdev3", 00:07:52.504 "uuid": 
"ebe46f2c-cb58-4d29-b724-a132e4757374", 00:07:52.504 "is_configured": true, 00:07:52.504 "data_offset": 0, 00:07:52.504 "data_size": 65536 00:07:52.504 }, 00:07:52.504 { 00:07:52.504 "name": "BaseBdev4", 00:07:52.504 "uuid": "09e2c5a8-9308-4041-87b0-1e32e8084072", 00:07:52.504 "is_configured": true, 00:07:52.504 "data_offset": 0, 00:07:52.504 "data_size": 65536 00:07:52.504 } 00:07:52.504 ] 00:07:52.504 }' 00:07:52.504 19:48:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:52.504 19:48:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.781 19:48:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:52.781 19:48:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:52.781 19:48:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:52.781 19:48:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:52.781 19:48:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:52.781 19:48:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:52.781 19:48:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:52.781 19:48:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:52.781 19:48:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.781 19:48:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.781 [2024-11-26 19:48:43.635649] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:52.781 19:48:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.781 19:48:43 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:52.781 "name": "Existed_Raid", 00:07:52.781 "aliases": [ 00:07:52.781 "957483a8-ea93-4178-baca-31d91bdf069b" 00:07:52.781 ], 00:07:52.781 "product_name": "Raid Volume", 00:07:52.781 "block_size": 512, 00:07:52.781 "num_blocks": 262144, 00:07:52.781 "uuid": "957483a8-ea93-4178-baca-31d91bdf069b", 00:07:52.781 "assigned_rate_limits": { 00:07:52.781 "rw_ios_per_sec": 0, 00:07:52.781 "rw_mbytes_per_sec": 0, 00:07:52.781 "r_mbytes_per_sec": 0, 00:07:52.781 "w_mbytes_per_sec": 0 00:07:52.781 }, 00:07:52.781 "claimed": false, 00:07:52.781 "zoned": false, 00:07:52.781 "supported_io_types": { 00:07:52.781 "read": true, 00:07:52.781 "write": true, 00:07:52.781 "unmap": true, 00:07:52.781 "flush": true, 00:07:52.781 "reset": true, 00:07:52.781 "nvme_admin": false, 00:07:52.781 "nvme_io": false, 00:07:52.781 "nvme_io_md": false, 00:07:52.781 "write_zeroes": true, 00:07:52.781 "zcopy": false, 00:07:52.781 "get_zone_info": false, 00:07:52.781 "zone_management": false, 00:07:52.781 "zone_append": false, 00:07:52.781 "compare": false, 00:07:52.781 "compare_and_write": false, 00:07:52.781 "abort": false, 00:07:52.781 "seek_hole": false, 00:07:52.781 "seek_data": false, 00:07:52.781 "copy": false, 00:07:52.781 "nvme_iov_md": false 00:07:52.781 }, 00:07:52.781 "memory_domains": [ 00:07:52.781 { 00:07:52.781 "dma_device_id": "system", 00:07:52.781 "dma_device_type": 1 00:07:52.781 }, 00:07:52.781 { 00:07:52.781 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:52.781 "dma_device_type": 2 00:07:52.781 }, 00:07:52.781 { 00:07:52.781 "dma_device_id": "system", 00:07:52.781 "dma_device_type": 1 00:07:52.781 }, 00:07:52.781 { 00:07:52.781 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:52.781 "dma_device_type": 2 00:07:52.781 }, 00:07:52.781 { 00:07:52.781 "dma_device_id": "system", 00:07:52.781 "dma_device_type": 1 00:07:52.781 }, 00:07:52.781 { 00:07:52.781 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:07:52.781 "dma_device_type": 2 00:07:52.781 }, 00:07:52.781 { 00:07:52.781 "dma_device_id": "system", 00:07:52.781 "dma_device_type": 1 00:07:52.781 }, 00:07:52.781 { 00:07:52.781 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:52.781 "dma_device_type": 2 00:07:52.781 } 00:07:52.781 ], 00:07:52.781 "driver_specific": { 00:07:52.781 "raid": { 00:07:52.781 "uuid": "957483a8-ea93-4178-baca-31d91bdf069b", 00:07:52.781 "strip_size_kb": 64, 00:07:52.781 "state": "online", 00:07:52.781 "raid_level": "raid0", 00:07:52.781 "superblock": false, 00:07:52.781 "num_base_bdevs": 4, 00:07:52.781 "num_base_bdevs_discovered": 4, 00:07:52.781 "num_base_bdevs_operational": 4, 00:07:52.781 "base_bdevs_list": [ 00:07:52.781 { 00:07:52.781 "name": "BaseBdev1", 00:07:52.781 "uuid": "5174023f-ffc9-40ee-8a04-f131088f324b", 00:07:52.781 "is_configured": true, 00:07:52.781 "data_offset": 0, 00:07:52.781 "data_size": 65536 00:07:52.781 }, 00:07:52.781 { 00:07:52.781 "name": "BaseBdev2", 00:07:52.781 "uuid": "da101a11-4acd-4689-97ce-09cb9d7716ea", 00:07:52.781 "is_configured": true, 00:07:52.781 "data_offset": 0, 00:07:52.781 "data_size": 65536 00:07:52.781 }, 00:07:52.781 { 00:07:52.781 "name": "BaseBdev3", 00:07:52.781 "uuid": "ebe46f2c-cb58-4d29-b724-a132e4757374", 00:07:52.781 "is_configured": true, 00:07:52.781 "data_offset": 0, 00:07:52.781 "data_size": 65536 00:07:52.781 }, 00:07:52.781 { 00:07:52.781 "name": "BaseBdev4", 00:07:52.782 "uuid": "09e2c5a8-9308-4041-87b0-1e32e8084072", 00:07:52.782 "is_configured": true, 00:07:52.782 "data_offset": 0, 00:07:52.782 "data_size": 65536 00:07:52.782 } 00:07:52.782 ] 00:07:52.782 } 00:07:52.782 } 00:07:52.782 }' 00:07:52.782 19:48:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:53.040 19:48:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:53.040 BaseBdev2 00:07:53.040 BaseBdev3 
00:07:53.040 BaseBdev4' 00:07:53.040 19:48:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:53.040 19:48:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:53.040 19:48:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:53.040 19:48:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:53.040 19:48:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.040 19:48:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.040 19:48:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:53.040 19:48:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.040 19:48:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:53.040 19:48:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:53.040 19:48:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:53.040 19:48:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:53.040 19:48:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:53.040 19:48:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.040 19:48:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.040 19:48:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.040 19:48:43 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:53.040 19:48:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:53.040 19:48:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:53.040 19:48:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:07:53.040 19:48:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.040 19:48:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.040 19:48:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:53.040 19:48:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.040 19:48:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:53.040 19:48:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:53.041 19:48:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:53.041 19:48:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:07:53.041 19:48:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.041 19:48:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:53.041 19:48:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.041 19:48:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.041 19:48:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:53.041 19:48:43 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:53.041 19:48:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:53.041 19:48:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.041 19:48:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.041 [2024-11-26 19:48:43.847456] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:53.041 [2024-11-26 19:48:43.847492] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:53.041 [2024-11-26 19:48:43.847544] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:53.041 19:48:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.041 19:48:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:53.041 19:48:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:07:53.041 19:48:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:53.041 19:48:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:53.041 19:48:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:07:53.041 19:48:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:07:53.041 19:48:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:53.041 19:48:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:53.041 19:48:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:53.041 19:48:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:07:53.041 19:48:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:53.041 19:48:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:53.041 19:48:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:53.041 19:48:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:53.041 19:48:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:53.041 19:48:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:53.041 19:48:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.041 19:48:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.041 19:48:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:53.041 19:48:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.041 19:48:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:53.041 "name": "Existed_Raid", 00:07:53.041 "uuid": "957483a8-ea93-4178-baca-31d91bdf069b", 00:07:53.041 "strip_size_kb": 64, 00:07:53.041 "state": "offline", 00:07:53.041 "raid_level": "raid0", 00:07:53.041 "superblock": false, 00:07:53.041 "num_base_bdevs": 4, 00:07:53.041 "num_base_bdevs_discovered": 3, 00:07:53.041 "num_base_bdevs_operational": 3, 00:07:53.041 "base_bdevs_list": [ 00:07:53.041 { 00:07:53.041 "name": null, 00:07:53.041 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:53.041 "is_configured": false, 00:07:53.041 "data_offset": 0, 00:07:53.041 "data_size": 65536 00:07:53.041 }, 00:07:53.041 { 00:07:53.041 "name": "BaseBdev2", 00:07:53.041 "uuid": "da101a11-4acd-4689-97ce-09cb9d7716ea", 00:07:53.041 "is_configured": 
true, 00:07:53.041 "data_offset": 0, 00:07:53.041 "data_size": 65536 00:07:53.041 }, 00:07:53.041 { 00:07:53.041 "name": "BaseBdev3", 00:07:53.041 "uuid": "ebe46f2c-cb58-4d29-b724-a132e4757374", 00:07:53.041 "is_configured": true, 00:07:53.041 "data_offset": 0, 00:07:53.041 "data_size": 65536 00:07:53.041 }, 00:07:53.041 { 00:07:53.041 "name": "BaseBdev4", 00:07:53.041 "uuid": "09e2c5a8-9308-4041-87b0-1e32e8084072", 00:07:53.041 "is_configured": true, 00:07:53.041 "data_offset": 0, 00:07:53.041 "data_size": 65536 00:07:53.041 } 00:07:53.041 ] 00:07:53.041 }' 00:07:53.041 19:48:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:53.041 19:48:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.299 19:48:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:53.299 19:48:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:53.299 19:48:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:53.299 19:48:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:53.299 19:48:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.299 19:48:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.557 19:48:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.557 19:48:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:53.557 19:48:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:53.557 19:48:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:53.557 19:48:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:07:53.557 19:48:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.557 [2024-11-26 19:48:44.254087] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:53.557 19:48:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.557 19:48:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:53.557 19:48:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:53.557 19:48:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:53.557 19:48:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:53.557 19:48:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.557 19:48:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.557 19:48:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.557 19:48:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:53.557 19:48:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:53.557 19:48:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:07:53.557 19:48:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.557 19:48:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.557 [2024-11-26 19:48:44.343572] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:07:53.557 19:48:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.557 19:48:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:53.557 19:48:44 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:53.557 19:48:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:53.557 19:48:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:53.557 19:48:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.557 19:48:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.557 19:48:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.557 19:48:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:53.557 19:48:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:53.557 19:48:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:07:53.557 19:48:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.557 19:48:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.557 [2024-11-26 19:48:44.426294] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:07:53.557 [2024-11-26 19:48:44.426361] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:07:53.557 19:48:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.557 19:48:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:53.557 19:48:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:53.557 19:48:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:53.557 19:48:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r 
'.[0]["name"] | select(.)' 00:07:53.557 19:48:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.557 19:48:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.816 19:48:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.816 19:48:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:53.816 19:48:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:53.816 19:48:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:07:53.816 19:48:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:07:53.816 19:48:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:07:53.816 19:48:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:53.816 19:48:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.817 19:48:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.817 BaseBdev2 00:07:53.817 19:48:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.817 19:48:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:07:53.817 19:48:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:07:53.817 19:48:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:53.817 19:48:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:53.817 19:48:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:53.817 19:48:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:07:53.817 19:48:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:53.817 19:48:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.817 19:48:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.817 19:48:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.817 19:48:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:53.817 19:48:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.817 19:48:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.817 [ 00:07:53.817 { 00:07:53.817 "name": "BaseBdev2", 00:07:53.817 "aliases": [ 00:07:53.817 "eff5c0c1-83a5-4b3b-b69d-d298f196b5e1" 00:07:53.817 ], 00:07:53.817 "product_name": "Malloc disk", 00:07:53.817 "block_size": 512, 00:07:53.817 "num_blocks": 65536, 00:07:53.817 "uuid": "eff5c0c1-83a5-4b3b-b69d-d298f196b5e1", 00:07:53.817 "assigned_rate_limits": { 00:07:53.817 "rw_ios_per_sec": 0, 00:07:53.817 "rw_mbytes_per_sec": 0, 00:07:53.817 "r_mbytes_per_sec": 0, 00:07:53.817 "w_mbytes_per_sec": 0 00:07:53.817 }, 00:07:53.817 "claimed": false, 00:07:53.817 "zoned": false, 00:07:53.817 "supported_io_types": { 00:07:53.817 "read": true, 00:07:53.817 "write": true, 00:07:53.817 "unmap": true, 00:07:53.817 "flush": true, 00:07:53.817 "reset": true, 00:07:53.817 "nvme_admin": false, 00:07:53.817 "nvme_io": false, 00:07:53.817 "nvme_io_md": false, 00:07:53.817 "write_zeroes": true, 00:07:53.817 "zcopy": true, 00:07:53.817 "get_zone_info": false, 00:07:53.817 "zone_management": false, 00:07:53.817 "zone_append": false, 00:07:53.817 "compare": false, 00:07:53.817 "compare_and_write": false, 00:07:53.817 "abort": true, 00:07:53.817 "seek_hole": false, 00:07:53.817 
"seek_data": false, 00:07:53.817 "copy": true, 00:07:53.817 "nvme_iov_md": false 00:07:53.817 }, 00:07:53.817 "memory_domains": [ 00:07:53.817 { 00:07:53.817 "dma_device_id": "system", 00:07:53.817 "dma_device_type": 1 00:07:53.817 }, 00:07:53.817 { 00:07:53.817 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:53.817 "dma_device_type": 2 00:07:53.817 } 00:07:53.817 ], 00:07:53.817 "driver_specific": {} 00:07:53.817 } 00:07:53.817 ] 00:07:53.817 19:48:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.817 19:48:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:53.817 19:48:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:07:53.817 19:48:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:07:53.817 19:48:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:07:53.817 19:48:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.817 19:48:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.817 BaseBdev3 00:07:53.817 19:48:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.817 19:48:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:07:53.817 19:48:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:07:53.817 19:48:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:53.817 19:48:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:53.817 19:48:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:53.817 19:48:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:07:53.817 19:48:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:53.817 19:48:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.817 19:48:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.817 19:48:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.817 19:48:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:07:53.817 19:48:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.817 19:48:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.817 [ 00:07:53.817 { 00:07:53.817 "name": "BaseBdev3", 00:07:53.817 "aliases": [ 00:07:53.817 "73e38601-2ba8-472c-822d-b6c004cf6597" 00:07:53.817 ], 00:07:53.817 "product_name": "Malloc disk", 00:07:53.817 "block_size": 512, 00:07:53.817 "num_blocks": 65536, 00:07:53.817 "uuid": "73e38601-2ba8-472c-822d-b6c004cf6597", 00:07:53.817 "assigned_rate_limits": { 00:07:53.817 "rw_ios_per_sec": 0, 00:07:53.817 "rw_mbytes_per_sec": 0, 00:07:53.817 "r_mbytes_per_sec": 0, 00:07:53.817 "w_mbytes_per_sec": 0 00:07:53.817 }, 00:07:53.817 "claimed": false, 00:07:53.817 "zoned": false, 00:07:53.817 "supported_io_types": { 00:07:53.817 "read": true, 00:07:53.817 "write": true, 00:07:53.817 "unmap": true, 00:07:53.817 "flush": true, 00:07:53.817 "reset": true, 00:07:53.817 "nvme_admin": false, 00:07:53.817 "nvme_io": false, 00:07:53.817 "nvme_io_md": false, 00:07:53.817 "write_zeroes": true, 00:07:53.817 "zcopy": true, 00:07:53.817 "get_zone_info": false, 00:07:53.817 "zone_management": false, 00:07:53.817 "zone_append": false, 00:07:53.817 "compare": false, 00:07:53.817 "compare_and_write": false, 00:07:53.817 "abort": true, 00:07:53.817 "seek_hole": false, 00:07:53.817 "seek_data": false, 
00:07:53.817 "copy": true, 00:07:53.817 "nvme_iov_md": false 00:07:53.817 }, 00:07:53.817 "memory_domains": [ 00:07:53.817 { 00:07:53.817 "dma_device_id": "system", 00:07:53.817 "dma_device_type": 1 00:07:53.817 }, 00:07:53.817 { 00:07:53.817 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:53.817 "dma_device_type": 2 00:07:53.817 } 00:07:53.817 ], 00:07:53.817 "driver_specific": {} 00:07:53.817 } 00:07:53.817 ] 00:07:53.817 19:48:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.817 19:48:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:53.817 19:48:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:07:53.817 19:48:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:07:53.817 19:48:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:07:53.817 19:48:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.818 19:48:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.818 BaseBdev4 00:07:53.818 19:48:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.818 19:48:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:07:53.818 19:48:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:07:53.818 19:48:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:53.818 19:48:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:53.818 19:48:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:53.818 19:48:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:53.818 
19:48:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:53.818 19:48:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.818 19:48:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.818 19:48:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.818 19:48:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:07:53.818 19:48:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.818 19:48:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.818 [ 00:07:53.818 { 00:07:53.818 "name": "BaseBdev4", 00:07:53.818 "aliases": [ 00:07:53.818 "b3022295-7f12-4fd4-b619-586216ddbfd1" 00:07:53.818 ], 00:07:53.818 "product_name": "Malloc disk", 00:07:53.818 "block_size": 512, 00:07:53.818 "num_blocks": 65536, 00:07:53.818 "uuid": "b3022295-7f12-4fd4-b619-586216ddbfd1", 00:07:53.818 "assigned_rate_limits": { 00:07:53.818 "rw_ios_per_sec": 0, 00:07:53.818 "rw_mbytes_per_sec": 0, 00:07:53.818 "r_mbytes_per_sec": 0, 00:07:53.818 "w_mbytes_per_sec": 0 00:07:53.818 }, 00:07:53.818 "claimed": false, 00:07:53.818 "zoned": false, 00:07:53.818 "supported_io_types": { 00:07:53.818 "read": true, 00:07:53.818 "write": true, 00:07:53.818 "unmap": true, 00:07:53.818 "flush": true, 00:07:53.818 "reset": true, 00:07:53.818 "nvme_admin": false, 00:07:53.818 "nvme_io": false, 00:07:53.818 "nvme_io_md": false, 00:07:53.818 "write_zeroes": true, 00:07:53.818 "zcopy": true, 00:07:53.818 "get_zone_info": false, 00:07:53.818 "zone_management": false, 00:07:53.818 "zone_append": false, 00:07:53.818 "compare": false, 00:07:53.818 "compare_and_write": false, 00:07:53.818 "abort": true, 00:07:53.818 "seek_hole": false, 00:07:53.818 "seek_data": false, 00:07:53.818 
"copy": true, 00:07:53.818 "nvme_iov_md": false 00:07:53.818 }, 00:07:53.818 "memory_domains": [ 00:07:53.818 { 00:07:53.818 "dma_device_id": "system", 00:07:53.818 "dma_device_type": 1 00:07:53.818 }, 00:07:53.818 { 00:07:53.818 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:53.818 "dma_device_type": 2 00:07:53.818 } 00:07:53.818 ], 00:07:53.818 "driver_specific": {} 00:07:53.818 } 00:07:53.818 ] 00:07:53.818 19:48:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.818 19:48:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:53.818 19:48:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:07:53.818 19:48:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:07:53.818 19:48:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:07:53.818 19:48:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.818 19:48:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.818 [2024-11-26 19:48:44.672320] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:53.818 [2024-11-26 19:48:44.672386] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:53.818 [2024-11-26 19:48:44.672411] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:53.818 [2024-11-26 19:48:44.674097] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:07:53.818 [2024-11-26 19:48:44.674153] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:07:53.818 19:48:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.818 19:48:44 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:07:53.818 19:48:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:53.818 19:48:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:53.818 19:48:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:53.818 19:48:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:53.818 19:48:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:07:53.818 19:48:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:53.818 19:48:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:53.818 19:48:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:53.818 19:48:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:53.818 19:48:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:53.818 19:48:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:53.818 19:48:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.818 19:48:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.818 19:48:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.818 19:48:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:53.818 "name": "Existed_Raid", 00:07:53.818 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:53.818 "strip_size_kb": 64, 00:07:53.818 "state": "configuring", 00:07:53.818 
"raid_level": "raid0", 00:07:53.818 "superblock": false, 00:07:53.818 "num_base_bdevs": 4, 00:07:53.818 "num_base_bdevs_discovered": 3, 00:07:53.818 "num_base_bdevs_operational": 4, 00:07:53.818 "base_bdevs_list": [ 00:07:53.818 { 00:07:53.818 "name": "BaseBdev1", 00:07:53.818 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:53.818 "is_configured": false, 00:07:53.818 "data_offset": 0, 00:07:53.818 "data_size": 0 00:07:53.818 }, 00:07:53.818 { 00:07:53.818 "name": "BaseBdev2", 00:07:53.818 "uuid": "eff5c0c1-83a5-4b3b-b69d-d298f196b5e1", 00:07:53.818 "is_configured": true, 00:07:53.818 "data_offset": 0, 00:07:53.818 "data_size": 65536 00:07:53.818 }, 00:07:53.818 { 00:07:53.818 "name": "BaseBdev3", 00:07:53.818 "uuid": "73e38601-2ba8-472c-822d-b6c004cf6597", 00:07:53.818 "is_configured": true, 00:07:53.818 "data_offset": 0, 00:07:53.818 "data_size": 65536 00:07:53.818 }, 00:07:53.818 { 00:07:53.818 "name": "BaseBdev4", 00:07:53.818 "uuid": "b3022295-7f12-4fd4-b619-586216ddbfd1", 00:07:53.818 "is_configured": true, 00:07:53.818 "data_offset": 0, 00:07:53.818 "data_size": 65536 00:07:53.818 } 00:07:53.818 ] 00:07:53.818 }' 00:07:53.818 19:48:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:53.818 19:48:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.077 19:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:07:54.077 19:48:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.077 19:48:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.077 [2024-11-26 19:48:45.012377] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:54.336 19:48:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.336 19:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:07:54.336 19:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:54.336 19:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:54.336 19:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:54.336 19:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:54.336 19:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:07:54.336 19:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:54.336 19:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:54.336 19:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:54.336 19:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:54.336 19:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:54.336 19:48:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.336 19:48:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.336 19:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:54.336 19:48:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.336 19:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:54.336 "name": "Existed_Raid", 00:07:54.336 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:54.336 "strip_size_kb": 64, 00:07:54.336 "state": "configuring", 00:07:54.336 "raid_level": "raid0", 00:07:54.336 "superblock": false, 00:07:54.336 
"num_base_bdevs": 4, 00:07:54.336 "num_base_bdevs_discovered": 2, 00:07:54.336 "num_base_bdevs_operational": 4, 00:07:54.336 "base_bdevs_list": [ 00:07:54.336 { 00:07:54.336 "name": "BaseBdev1", 00:07:54.336 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:54.336 "is_configured": false, 00:07:54.336 "data_offset": 0, 00:07:54.336 "data_size": 0 00:07:54.336 }, 00:07:54.336 { 00:07:54.336 "name": null, 00:07:54.336 "uuid": "eff5c0c1-83a5-4b3b-b69d-d298f196b5e1", 00:07:54.336 "is_configured": false, 00:07:54.336 "data_offset": 0, 00:07:54.336 "data_size": 65536 00:07:54.336 }, 00:07:54.336 { 00:07:54.336 "name": "BaseBdev3", 00:07:54.336 "uuid": "73e38601-2ba8-472c-822d-b6c004cf6597", 00:07:54.336 "is_configured": true, 00:07:54.336 "data_offset": 0, 00:07:54.336 "data_size": 65536 00:07:54.336 }, 00:07:54.336 { 00:07:54.336 "name": "BaseBdev4", 00:07:54.336 "uuid": "b3022295-7f12-4fd4-b619-586216ddbfd1", 00:07:54.336 "is_configured": true, 00:07:54.336 "data_offset": 0, 00:07:54.336 "data_size": 65536 00:07:54.336 } 00:07:54.336 ] 00:07:54.336 }' 00:07:54.336 19:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:54.336 19:48:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.596 19:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:54.596 19:48:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.596 19:48:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.596 19:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:07:54.596 19:48:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.596 19:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:07:54.596 19:48:45 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:54.596 19:48:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.596 19:48:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.596 [2024-11-26 19:48:45.380844] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:54.596 BaseBdev1 00:07:54.596 19:48:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.596 19:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:07:54.596 19:48:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:07:54.596 19:48:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:54.596 19:48:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:54.596 19:48:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:54.596 19:48:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:54.596 19:48:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:54.596 19:48:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.596 19:48:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.596 19:48:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.596 19:48:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:54.596 19:48:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.596 19:48:45 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:07:54.596 [ 00:07:54.596 { 00:07:54.597 "name": "BaseBdev1", 00:07:54.597 "aliases": [ 00:07:54.597 "cbd8c012-be70-4f42-98fc-0f722064b13b" 00:07:54.597 ], 00:07:54.597 "product_name": "Malloc disk", 00:07:54.597 "block_size": 512, 00:07:54.597 "num_blocks": 65536, 00:07:54.597 "uuid": "cbd8c012-be70-4f42-98fc-0f722064b13b", 00:07:54.597 "assigned_rate_limits": { 00:07:54.597 "rw_ios_per_sec": 0, 00:07:54.597 "rw_mbytes_per_sec": 0, 00:07:54.597 "r_mbytes_per_sec": 0, 00:07:54.597 "w_mbytes_per_sec": 0 00:07:54.597 }, 00:07:54.597 "claimed": true, 00:07:54.597 "claim_type": "exclusive_write", 00:07:54.597 "zoned": false, 00:07:54.597 "supported_io_types": { 00:07:54.597 "read": true, 00:07:54.597 "write": true, 00:07:54.597 "unmap": true, 00:07:54.597 "flush": true, 00:07:54.597 "reset": true, 00:07:54.597 "nvme_admin": false, 00:07:54.597 "nvme_io": false, 00:07:54.597 "nvme_io_md": false, 00:07:54.597 "write_zeroes": true, 00:07:54.597 "zcopy": true, 00:07:54.597 "get_zone_info": false, 00:07:54.597 "zone_management": false, 00:07:54.597 "zone_append": false, 00:07:54.597 "compare": false, 00:07:54.597 "compare_and_write": false, 00:07:54.597 "abort": true, 00:07:54.597 "seek_hole": false, 00:07:54.597 "seek_data": false, 00:07:54.597 "copy": true, 00:07:54.597 "nvme_iov_md": false 00:07:54.597 }, 00:07:54.597 "memory_domains": [ 00:07:54.597 { 00:07:54.597 "dma_device_id": "system", 00:07:54.597 "dma_device_type": 1 00:07:54.597 }, 00:07:54.597 { 00:07:54.597 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:54.597 "dma_device_type": 2 00:07:54.597 } 00:07:54.597 ], 00:07:54.597 "driver_specific": {} 00:07:54.597 } 00:07:54.597 ] 00:07:54.597 19:48:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.597 19:48:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:54.597 19:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:07:54.597 19:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:54.597 19:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:54.597 19:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:54.597 19:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:54.597 19:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:07:54.597 19:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:54.597 19:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:54.597 19:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:54.597 19:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:54.597 19:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:54.597 19:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:54.597 19:48:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.597 19:48:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.597 19:48:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.597 19:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:54.597 "name": "Existed_Raid", 00:07:54.597 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:54.597 "strip_size_kb": 64, 00:07:54.597 "state": "configuring", 00:07:54.597 "raid_level": "raid0", 00:07:54.597 "superblock": false, 
00:07:54.597 "num_base_bdevs": 4, 00:07:54.597 "num_base_bdevs_discovered": 3, 00:07:54.597 "num_base_bdevs_operational": 4, 00:07:54.597 "base_bdevs_list": [ 00:07:54.597 { 00:07:54.597 "name": "BaseBdev1", 00:07:54.597 "uuid": "cbd8c012-be70-4f42-98fc-0f722064b13b", 00:07:54.597 "is_configured": true, 00:07:54.597 "data_offset": 0, 00:07:54.597 "data_size": 65536 00:07:54.597 }, 00:07:54.597 { 00:07:54.597 "name": null, 00:07:54.597 "uuid": "eff5c0c1-83a5-4b3b-b69d-d298f196b5e1", 00:07:54.597 "is_configured": false, 00:07:54.597 "data_offset": 0, 00:07:54.597 "data_size": 65536 00:07:54.597 }, 00:07:54.597 { 00:07:54.597 "name": "BaseBdev3", 00:07:54.597 "uuid": "73e38601-2ba8-472c-822d-b6c004cf6597", 00:07:54.597 "is_configured": true, 00:07:54.597 "data_offset": 0, 00:07:54.597 "data_size": 65536 00:07:54.597 }, 00:07:54.597 { 00:07:54.597 "name": "BaseBdev4", 00:07:54.597 "uuid": "b3022295-7f12-4fd4-b619-586216ddbfd1", 00:07:54.597 "is_configured": true, 00:07:54.597 "data_offset": 0, 00:07:54.597 "data_size": 65536 00:07:54.597 } 00:07:54.597 ] 00:07:54.597 }' 00:07:54.597 19:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:54.597 19:48:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.856 19:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:54.856 19:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:07:54.856 19:48:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.856 19:48:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.856 19:48:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.856 19:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:07:54.856 19:48:45 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:07:54.856 19:48:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.856 19:48:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.856 [2024-11-26 19:48:45.769004] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:07:54.856 19:48:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.856 19:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:07:54.856 19:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:54.856 19:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:54.856 19:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:54.856 19:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:54.856 19:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:07:54.856 19:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:54.856 19:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:54.856 19:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:54.856 19:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:54.856 19:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:54.856 19:48:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.856 19:48:45 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:07:54.857 19:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:54.857 19:48:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.115 19:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:55.115 "name": "Existed_Raid", 00:07:55.115 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:55.115 "strip_size_kb": 64, 00:07:55.115 "state": "configuring", 00:07:55.115 "raid_level": "raid0", 00:07:55.115 "superblock": false, 00:07:55.115 "num_base_bdevs": 4, 00:07:55.115 "num_base_bdevs_discovered": 2, 00:07:55.115 "num_base_bdevs_operational": 4, 00:07:55.115 "base_bdevs_list": [ 00:07:55.115 { 00:07:55.115 "name": "BaseBdev1", 00:07:55.115 "uuid": "cbd8c012-be70-4f42-98fc-0f722064b13b", 00:07:55.115 "is_configured": true, 00:07:55.115 "data_offset": 0, 00:07:55.115 "data_size": 65536 00:07:55.115 }, 00:07:55.115 { 00:07:55.115 "name": null, 00:07:55.115 "uuid": "eff5c0c1-83a5-4b3b-b69d-d298f196b5e1", 00:07:55.115 "is_configured": false, 00:07:55.115 "data_offset": 0, 00:07:55.115 "data_size": 65536 00:07:55.115 }, 00:07:55.115 { 00:07:55.115 "name": null, 00:07:55.115 "uuid": "73e38601-2ba8-472c-822d-b6c004cf6597", 00:07:55.115 "is_configured": false, 00:07:55.115 "data_offset": 0, 00:07:55.115 "data_size": 65536 00:07:55.115 }, 00:07:55.115 { 00:07:55.116 "name": "BaseBdev4", 00:07:55.116 "uuid": "b3022295-7f12-4fd4-b619-586216ddbfd1", 00:07:55.116 "is_configured": true, 00:07:55.116 "data_offset": 0, 00:07:55.116 "data_size": 65536 00:07:55.116 } 00:07:55.116 ] 00:07:55.116 }' 00:07:55.116 19:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:55.116 19:48:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.375 19:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:07:55.375 19:48:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.375 19:48:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.375 19:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:07:55.375 19:48:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.375 19:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:07:55.375 19:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:07:55.375 19:48:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.375 19:48:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.375 [2024-11-26 19:48:46.121059] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:07:55.375 19:48:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.375 19:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:07:55.375 19:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:55.375 19:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:55.375 19:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:55.375 19:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:55.375 19:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:07:55.375 19:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:07:55.375 19:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:55.375 19:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:55.375 19:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:55.375 19:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:55.375 19:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:55.375 19:48:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.375 19:48:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.375 19:48:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.375 19:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:55.375 "name": "Existed_Raid", 00:07:55.375 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:55.375 "strip_size_kb": 64, 00:07:55.375 "state": "configuring", 00:07:55.375 "raid_level": "raid0", 00:07:55.375 "superblock": false, 00:07:55.375 "num_base_bdevs": 4, 00:07:55.375 "num_base_bdevs_discovered": 3, 00:07:55.375 "num_base_bdevs_operational": 4, 00:07:55.375 "base_bdevs_list": [ 00:07:55.375 { 00:07:55.375 "name": "BaseBdev1", 00:07:55.375 "uuid": "cbd8c012-be70-4f42-98fc-0f722064b13b", 00:07:55.375 "is_configured": true, 00:07:55.375 "data_offset": 0, 00:07:55.375 "data_size": 65536 00:07:55.375 }, 00:07:55.375 { 00:07:55.375 "name": null, 00:07:55.375 "uuid": "eff5c0c1-83a5-4b3b-b69d-d298f196b5e1", 00:07:55.375 "is_configured": false, 00:07:55.375 "data_offset": 0, 00:07:55.375 "data_size": 65536 00:07:55.375 }, 00:07:55.375 { 00:07:55.375 "name": "BaseBdev3", 00:07:55.375 "uuid": "73e38601-2ba8-472c-822d-b6c004cf6597", 00:07:55.375 "is_configured": 
true, 00:07:55.375 "data_offset": 0, 00:07:55.375 "data_size": 65536 00:07:55.375 }, 00:07:55.375 { 00:07:55.375 "name": "BaseBdev4", 00:07:55.375 "uuid": "b3022295-7f12-4fd4-b619-586216ddbfd1", 00:07:55.375 "is_configured": true, 00:07:55.375 "data_offset": 0, 00:07:55.375 "data_size": 65536 00:07:55.375 } 00:07:55.375 ] 00:07:55.375 }' 00:07:55.375 19:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:55.375 19:48:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.633 19:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:07:55.633 19:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:55.633 19:48:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.633 19:48:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.633 19:48:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.633 19:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:07:55.633 19:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:55.633 19:48:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.633 19:48:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.633 [2024-11-26 19:48:46.521182] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:55.891 19:48:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.891 19:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:07:55.891 19:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:07:55.891 19:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:55.891 19:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:55.891 19:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:55.891 19:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:07:55.891 19:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:55.891 19:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:55.891 19:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:55.891 19:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:55.891 19:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:55.891 19:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:55.891 19:48:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.891 19:48:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.891 19:48:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.891 19:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:55.891 "name": "Existed_Raid", 00:07:55.891 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:55.891 "strip_size_kb": 64, 00:07:55.891 "state": "configuring", 00:07:55.891 "raid_level": "raid0", 00:07:55.891 "superblock": false, 00:07:55.891 "num_base_bdevs": 4, 00:07:55.891 "num_base_bdevs_discovered": 2, 00:07:55.891 "num_base_bdevs_operational": 4, 00:07:55.891 
"base_bdevs_list": [ 00:07:55.891 { 00:07:55.891 "name": null, 00:07:55.891 "uuid": "cbd8c012-be70-4f42-98fc-0f722064b13b", 00:07:55.891 "is_configured": false, 00:07:55.891 "data_offset": 0, 00:07:55.891 "data_size": 65536 00:07:55.891 }, 00:07:55.891 { 00:07:55.891 "name": null, 00:07:55.891 "uuid": "eff5c0c1-83a5-4b3b-b69d-d298f196b5e1", 00:07:55.891 "is_configured": false, 00:07:55.891 "data_offset": 0, 00:07:55.891 "data_size": 65536 00:07:55.891 }, 00:07:55.891 { 00:07:55.891 "name": "BaseBdev3", 00:07:55.891 "uuid": "73e38601-2ba8-472c-822d-b6c004cf6597", 00:07:55.891 "is_configured": true, 00:07:55.891 "data_offset": 0, 00:07:55.891 "data_size": 65536 00:07:55.891 }, 00:07:55.891 { 00:07:55.891 "name": "BaseBdev4", 00:07:55.891 "uuid": "b3022295-7f12-4fd4-b619-586216ddbfd1", 00:07:55.891 "is_configured": true, 00:07:55.891 "data_offset": 0, 00:07:55.891 "data_size": 65536 00:07:55.891 } 00:07:55.891 ] 00:07:55.891 }' 00:07:55.891 19:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:55.891 19:48:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.148 19:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:56.148 19:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:07:56.148 19:48:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.148 19:48:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.148 19:48:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.148 19:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:07:56.148 19:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:07:56.148 19:48:46 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.148 19:48:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.148 [2024-11-26 19:48:46.967480] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:56.148 19:48:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.148 19:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:07:56.148 19:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:56.148 19:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:56.148 19:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:56.148 19:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:56.148 19:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:07:56.148 19:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:56.148 19:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:56.148 19:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:56.148 19:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:56.148 19:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:56.148 19:48:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.148 19:48:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.148 19:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq 
-r '.[] | select(.name == "Existed_Raid")' 00:07:56.148 19:48:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.148 19:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:56.148 "name": "Existed_Raid", 00:07:56.148 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:56.148 "strip_size_kb": 64, 00:07:56.148 "state": "configuring", 00:07:56.148 "raid_level": "raid0", 00:07:56.148 "superblock": false, 00:07:56.148 "num_base_bdevs": 4, 00:07:56.148 "num_base_bdevs_discovered": 3, 00:07:56.148 "num_base_bdevs_operational": 4, 00:07:56.148 "base_bdevs_list": [ 00:07:56.148 { 00:07:56.148 "name": null, 00:07:56.148 "uuid": "cbd8c012-be70-4f42-98fc-0f722064b13b", 00:07:56.148 "is_configured": false, 00:07:56.148 "data_offset": 0, 00:07:56.148 "data_size": 65536 00:07:56.148 }, 00:07:56.148 { 00:07:56.148 "name": "BaseBdev2", 00:07:56.148 "uuid": "eff5c0c1-83a5-4b3b-b69d-d298f196b5e1", 00:07:56.148 "is_configured": true, 00:07:56.148 "data_offset": 0, 00:07:56.148 "data_size": 65536 00:07:56.148 }, 00:07:56.148 { 00:07:56.148 "name": "BaseBdev3", 00:07:56.149 "uuid": "73e38601-2ba8-472c-822d-b6c004cf6597", 00:07:56.149 "is_configured": true, 00:07:56.149 "data_offset": 0, 00:07:56.149 "data_size": 65536 00:07:56.149 }, 00:07:56.149 { 00:07:56.149 "name": "BaseBdev4", 00:07:56.149 "uuid": "b3022295-7f12-4fd4-b619-586216ddbfd1", 00:07:56.149 "is_configured": true, 00:07:56.149 "data_offset": 0, 00:07:56.149 "data_size": 65536 00:07:56.149 } 00:07:56.149 ] 00:07:56.149 }' 00:07:56.149 19:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:56.149 19:48:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.405 19:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:56.405 19:48:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:07:56.405 19:48:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.405 19:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:07:56.405 19:48:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.664 19:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:07:56.664 19:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:56.664 19:48:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.664 19:48:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.664 19:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:07:56.664 19:48:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.664 19:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u cbd8c012-be70-4f42-98fc-0f722064b13b 00:07:56.664 19:48:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.664 19:48:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.664 [2024-11-26 19:48:47.407807] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:07:56.664 [2024-11-26 19:48:47.407850] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:07:56.664 [2024-11-26 19:48:47.407857] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:07:56.664 [2024-11-26 19:48:47.408085] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:07:56.664 [2024-11-26 19:48:47.408192] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:07:56.664 [2024-11-26 19:48:47.408200] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:07:56.664 [2024-11-26 19:48:47.408422] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:56.664 NewBaseBdev 00:07:56.664 19:48:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.664 19:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:07:56.664 19:48:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:07:56.664 19:48:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:56.664 19:48:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:56.664 19:48:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:56.664 19:48:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:56.664 19:48:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:56.664 19:48:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.664 19:48:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.664 19:48:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.664 19:48:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:07:56.664 19:48:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.664 19:48:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.664 [ 00:07:56.664 { 
00:07:56.664 "name": "NewBaseBdev", 00:07:56.664 "aliases": [ 00:07:56.664 "cbd8c012-be70-4f42-98fc-0f722064b13b" 00:07:56.664 ], 00:07:56.664 "product_name": "Malloc disk", 00:07:56.664 "block_size": 512, 00:07:56.664 "num_blocks": 65536, 00:07:56.664 "uuid": "cbd8c012-be70-4f42-98fc-0f722064b13b", 00:07:56.664 "assigned_rate_limits": { 00:07:56.664 "rw_ios_per_sec": 0, 00:07:56.664 "rw_mbytes_per_sec": 0, 00:07:56.664 "r_mbytes_per_sec": 0, 00:07:56.664 "w_mbytes_per_sec": 0 00:07:56.664 }, 00:07:56.664 "claimed": true, 00:07:56.664 "claim_type": "exclusive_write", 00:07:56.664 "zoned": false, 00:07:56.664 "supported_io_types": { 00:07:56.664 "read": true, 00:07:56.664 "write": true, 00:07:56.664 "unmap": true, 00:07:56.664 "flush": true, 00:07:56.664 "reset": true, 00:07:56.664 "nvme_admin": false, 00:07:56.664 "nvme_io": false, 00:07:56.664 "nvme_io_md": false, 00:07:56.664 "write_zeroes": true, 00:07:56.664 "zcopy": true, 00:07:56.664 "get_zone_info": false, 00:07:56.664 "zone_management": false, 00:07:56.664 "zone_append": false, 00:07:56.664 "compare": false, 00:07:56.664 "compare_and_write": false, 00:07:56.664 "abort": true, 00:07:56.664 "seek_hole": false, 00:07:56.664 "seek_data": false, 00:07:56.664 "copy": true, 00:07:56.664 "nvme_iov_md": false 00:07:56.664 }, 00:07:56.664 "memory_domains": [ 00:07:56.664 { 00:07:56.664 "dma_device_id": "system", 00:07:56.664 "dma_device_type": 1 00:07:56.664 }, 00:07:56.664 { 00:07:56.664 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:56.664 "dma_device_type": 2 00:07:56.664 } 00:07:56.664 ], 00:07:56.664 "driver_specific": {} 00:07:56.664 } 00:07:56.664 ] 00:07:56.664 19:48:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.664 19:48:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:56.664 19:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:07:56.664 
19:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:56.664 19:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:56.664 19:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:56.664 19:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:56.664 19:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:07:56.664 19:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:56.664 19:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:56.664 19:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:56.664 19:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:56.664 19:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:56.664 19:48:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.664 19:48:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.664 19:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:56.664 19:48:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.664 19:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:56.664 "name": "Existed_Raid", 00:07:56.664 "uuid": "4ee24140-66f4-4628-a1d5-864c55d92f18", 00:07:56.664 "strip_size_kb": 64, 00:07:56.664 "state": "online", 00:07:56.664 "raid_level": "raid0", 00:07:56.664 "superblock": false, 00:07:56.664 "num_base_bdevs": 4, 00:07:56.664 "num_base_bdevs_discovered": 4, 00:07:56.664 
"num_base_bdevs_operational": 4, 00:07:56.664 "base_bdevs_list": [ 00:07:56.664 { 00:07:56.664 "name": "NewBaseBdev", 00:07:56.664 "uuid": "cbd8c012-be70-4f42-98fc-0f722064b13b", 00:07:56.664 "is_configured": true, 00:07:56.664 "data_offset": 0, 00:07:56.664 "data_size": 65536 00:07:56.664 }, 00:07:56.664 { 00:07:56.664 "name": "BaseBdev2", 00:07:56.664 "uuid": "eff5c0c1-83a5-4b3b-b69d-d298f196b5e1", 00:07:56.665 "is_configured": true, 00:07:56.665 "data_offset": 0, 00:07:56.665 "data_size": 65536 00:07:56.665 }, 00:07:56.665 { 00:07:56.665 "name": "BaseBdev3", 00:07:56.665 "uuid": "73e38601-2ba8-472c-822d-b6c004cf6597", 00:07:56.665 "is_configured": true, 00:07:56.665 "data_offset": 0, 00:07:56.665 "data_size": 65536 00:07:56.665 }, 00:07:56.665 { 00:07:56.665 "name": "BaseBdev4", 00:07:56.665 "uuid": "b3022295-7f12-4fd4-b619-586216ddbfd1", 00:07:56.665 "is_configured": true, 00:07:56.665 "data_offset": 0, 00:07:56.665 "data_size": 65536 00:07:56.665 } 00:07:56.665 ] 00:07:56.665 }' 00:07:56.665 19:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:56.665 19:48:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.923 19:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:07:56.923 19:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:56.923 19:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:56.923 19:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:56.923 19:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:56.923 19:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:56.923 19:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:56.923 
19:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:56.923 19:48:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.923 19:48:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.923 [2024-11-26 19:48:47.776255] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:56.923 19:48:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.923 19:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:56.923 "name": "Existed_Raid", 00:07:56.923 "aliases": [ 00:07:56.923 "4ee24140-66f4-4628-a1d5-864c55d92f18" 00:07:56.923 ], 00:07:56.923 "product_name": "Raid Volume", 00:07:56.923 "block_size": 512, 00:07:56.923 "num_blocks": 262144, 00:07:56.923 "uuid": "4ee24140-66f4-4628-a1d5-864c55d92f18", 00:07:56.923 "assigned_rate_limits": { 00:07:56.923 "rw_ios_per_sec": 0, 00:07:56.923 "rw_mbytes_per_sec": 0, 00:07:56.923 "r_mbytes_per_sec": 0, 00:07:56.923 "w_mbytes_per_sec": 0 00:07:56.923 }, 00:07:56.923 "claimed": false, 00:07:56.923 "zoned": false, 00:07:56.923 "supported_io_types": { 00:07:56.923 "read": true, 00:07:56.923 "write": true, 00:07:56.923 "unmap": true, 00:07:56.923 "flush": true, 00:07:56.923 "reset": true, 00:07:56.923 "nvme_admin": false, 00:07:56.923 "nvme_io": false, 00:07:56.923 "nvme_io_md": false, 00:07:56.923 "write_zeroes": true, 00:07:56.923 "zcopy": false, 00:07:56.923 "get_zone_info": false, 00:07:56.923 "zone_management": false, 00:07:56.923 "zone_append": false, 00:07:56.923 "compare": false, 00:07:56.923 "compare_and_write": false, 00:07:56.923 "abort": false, 00:07:56.923 "seek_hole": false, 00:07:56.923 "seek_data": false, 00:07:56.923 "copy": false, 00:07:56.923 "nvme_iov_md": false 00:07:56.923 }, 00:07:56.923 "memory_domains": [ 00:07:56.923 { 00:07:56.923 "dma_device_id": 
"system", 00:07:56.923 "dma_device_type": 1 00:07:56.923 }, 00:07:56.923 { 00:07:56.923 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:56.923 "dma_device_type": 2 00:07:56.923 }, 00:07:56.923 { 00:07:56.923 "dma_device_id": "system", 00:07:56.923 "dma_device_type": 1 00:07:56.923 }, 00:07:56.923 { 00:07:56.923 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:56.923 "dma_device_type": 2 00:07:56.923 }, 00:07:56.923 { 00:07:56.923 "dma_device_id": "system", 00:07:56.923 "dma_device_type": 1 00:07:56.923 }, 00:07:56.923 { 00:07:56.923 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:56.923 "dma_device_type": 2 00:07:56.923 }, 00:07:56.923 { 00:07:56.923 "dma_device_id": "system", 00:07:56.923 "dma_device_type": 1 00:07:56.923 }, 00:07:56.923 { 00:07:56.923 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:56.923 "dma_device_type": 2 00:07:56.923 } 00:07:56.923 ], 00:07:56.923 "driver_specific": { 00:07:56.923 "raid": { 00:07:56.923 "uuid": "4ee24140-66f4-4628-a1d5-864c55d92f18", 00:07:56.923 "strip_size_kb": 64, 00:07:56.923 "state": "online", 00:07:56.923 "raid_level": "raid0", 00:07:56.923 "superblock": false, 00:07:56.923 "num_base_bdevs": 4, 00:07:56.923 "num_base_bdevs_discovered": 4, 00:07:56.923 "num_base_bdevs_operational": 4, 00:07:56.923 "base_bdevs_list": [ 00:07:56.923 { 00:07:56.923 "name": "NewBaseBdev", 00:07:56.923 "uuid": "cbd8c012-be70-4f42-98fc-0f722064b13b", 00:07:56.923 "is_configured": true, 00:07:56.923 "data_offset": 0, 00:07:56.923 "data_size": 65536 00:07:56.923 }, 00:07:56.923 { 00:07:56.923 "name": "BaseBdev2", 00:07:56.923 "uuid": "eff5c0c1-83a5-4b3b-b69d-d298f196b5e1", 00:07:56.923 "is_configured": true, 00:07:56.923 "data_offset": 0, 00:07:56.923 "data_size": 65536 00:07:56.923 }, 00:07:56.923 { 00:07:56.923 "name": "BaseBdev3", 00:07:56.923 "uuid": "73e38601-2ba8-472c-822d-b6c004cf6597", 00:07:56.923 "is_configured": true, 00:07:56.923 "data_offset": 0, 00:07:56.923 "data_size": 65536 00:07:56.923 }, 00:07:56.923 { 00:07:56.923 "name": 
"BaseBdev4", 00:07:56.923 "uuid": "b3022295-7f12-4fd4-b619-586216ddbfd1", 00:07:56.923 "is_configured": true, 00:07:56.923 "data_offset": 0, 00:07:56.923 "data_size": 65536 00:07:56.923 } 00:07:56.923 ] 00:07:56.923 } 00:07:56.924 } 00:07:56.924 }' 00:07:56.924 19:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:56.924 19:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:07:56.924 BaseBdev2 00:07:56.924 BaseBdev3 00:07:56.924 BaseBdev4' 00:07:56.924 19:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:57.181 19:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:57.181 19:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:57.181 19:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:07:57.181 19:48:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.181 19:48:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.181 19:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:57.181 19:48:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.181 19:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:57.181 19:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:57.181 19:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:57.181 19:48:47 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:57.182 19:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:57.182 19:48:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.182 19:48:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.182 19:48:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.182 19:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:57.182 19:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:57.182 19:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:57.182 19:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:57.182 19:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:07:57.182 19:48:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.182 19:48:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.182 19:48:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.182 19:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:57.182 19:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:57.182 19:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:57.182 19:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:07:57.182 19:48:47 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:57.182 19:48:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.182 19:48:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.182 19:48:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.182 19:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:57.182 19:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:57.182 19:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:57.182 19:48:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.182 19:48:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.182 [2024-11-26 19:48:47.999966] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:57.182 [2024-11-26 19:48:47.999997] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:57.182 [2024-11-26 19:48:48.000075] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:57.182 [2024-11-26 19:48:48.000140] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:57.182 [2024-11-26 19:48:48.000149] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:07:57.182 19:48:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.182 19:48:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 67675 00:07:57.182 19:48:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 
-- # '[' -z 67675 ']' 00:07:57.182 19:48:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 67675 00:07:57.182 19:48:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:07:57.182 19:48:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:57.182 19:48:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67675 00:07:57.182 killing process with pid 67675 00:07:57.182 19:48:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:57.182 19:48:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:57.182 19:48:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67675' 00:07:57.182 19:48:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 67675 00:07:57.182 [2024-11-26 19:48:48.031905] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:57.182 19:48:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 67675 00:07:57.439 [2024-11-26 19:48:48.240438] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:58.004 19:48:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:07:58.004 00:07:58.004 real 0m8.376s 00:07:58.004 user 0m13.450s 00:07:58.004 sys 0m1.476s 00:07:58.004 19:48:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:58.004 19:48:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.004 ************************************ 00:07:58.004 END TEST raid_state_function_test 00:07:58.004 ************************************ 00:07:58.004 19:48:48 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 4 true 
00:07:58.004 19:48:48 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:58.004 19:48:48 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:58.004 19:48:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:58.004 ************************************ 00:07:58.004 START TEST raid_state_function_test_sb 00:07:58.004 ************************************ 00:07:58.004 19:48:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 4 true 00:07:58.004 19:48:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:07:58.004 19:48:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:07:58.004 19:48:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:07:58.004 19:48:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:58.004 19:48:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:58.004 19:48:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:58.004 19:48:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:58.004 19:48:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:58.004 19:48:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:58.004 19:48:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:58.004 19:48:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:58.004 19:48:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:58.004 19:48:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:07:58.004 19:48:48 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:58.004 19:48:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:58.004 19:48:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:07:58.004 19:48:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:58.004 19:48:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:58.004 19:48:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:07:58.004 19:48:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:58.004 19:48:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:58.004 19:48:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:58.004 19:48:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:58.004 19:48:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:58.004 19:48:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:07:58.004 19:48:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:58.004 19:48:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:58.004 19:48:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:07:58.004 19:48:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:07:58.005 Process raid pid: 68313 00:07:58.005 19:48:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=68313 00:07:58.005 19:48:48 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 68313' 00:07:58.005 19:48:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 68313 00:07:58.005 19:48:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 68313 ']' 00:07:58.005 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:58.005 19:48:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:58.005 19:48:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:58.005 19:48:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:58.005 19:48:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:58.005 19:48:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:58.005 19:48:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:58.298 [2024-11-26 19:48:48.975716] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 
00:07:58.298 [2024-11-26 19:48:48.975831] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:58.298 [2024-11-26 19:48:49.129922] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:58.586 [2024-11-26 19:48:49.228884] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.586 [2024-11-26 19:48:49.350679] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:58.586 [2024-11-26 19:48:49.350710] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:58.843 19:48:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:58.844 19:48:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:07:58.844 19:48:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:07:58.844 19:48:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.844 19:48:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:59.101 [2024-11-26 19:48:49.783710] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:59.101 [2024-11-26 19:48:49.783766] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:59.101 [2024-11-26 19:48:49.783776] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:59.101 [2024-11-26 19:48:49.783785] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:59.101 [2024-11-26 19:48:49.783790] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find 
bdev with name: BaseBdev3 00:07:59.101 [2024-11-26 19:48:49.783798] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:07:59.101 [2024-11-26 19:48:49.783803] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:07:59.101 [2024-11-26 19:48:49.783810] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:07:59.101 19:48:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.101 19:48:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:07:59.101 19:48:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:59.101 19:48:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:59.101 19:48:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:59.101 19:48:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:59.101 19:48:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:07:59.101 19:48:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:59.101 19:48:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:59.101 19:48:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:59.101 19:48:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:59.101 19:48:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:59.101 19:48:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:59.101 19:48:49 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.101 19:48:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:59.101 19:48:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.101 19:48:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:59.101 "name": "Existed_Raid", 00:07:59.101 "uuid": "dfa4a6ce-b234-482c-87f7-281896bc33b7", 00:07:59.101 "strip_size_kb": 64, 00:07:59.101 "state": "configuring", 00:07:59.101 "raid_level": "raid0", 00:07:59.101 "superblock": true, 00:07:59.101 "num_base_bdevs": 4, 00:07:59.102 "num_base_bdevs_discovered": 0, 00:07:59.102 "num_base_bdevs_operational": 4, 00:07:59.102 "base_bdevs_list": [ 00:07:59.102 { 00:07:59.102 "name": "BaseBdev1", 00:07:59.102 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:59.102 "is_configured": false, 00:07:59.102 "data_offset": 0, 00:07:59.102 "data_size": 0 00:07:59.102 }, 00:07:59.102 { 00:07:59.102 "name": "BaseBdev2", 00:07:59.102 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:59.102 "is_configured": false, 00:07:59.102 "data_offset": 0, 00:07:59.102 "data_size": 0 00:07:59.102 }, 00:07:59.102 { 00:07:59.102 "name": "BaseBdev3", 00:07:59.102 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:59.102 "is_configured": false, 00:07:59.102 "data_offset": 0, 00:07:59.102 "data_size": 0 00:07:59.102 }, 00:07:59.102 { 00:07:59.102 "name": "BaseBdev4", 00:07:59.102 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:59.102 "is_configured": false, 00:07:59.102 "data_offset": 0, 00:07:59.102 "data_size": 0 00:07:59.102 } 00:07:59.102 ] 00:07:59.102 }' 00:07:59.102 19:48:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:59.102 19:48:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:59.360 19:48:50 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:59.360 19:48:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.360 19:48:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:59.360 [2024-11-26 19:48:50.099706] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:59.360 [2024-11-26 19:48:50.099743] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:07:59.360 19:48:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.360 19:48:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:07:59.360 19:48:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.360 19:48:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:59.360 [2024-11-26 19:48:50.107697] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:59.360 [2024-11-26 19:48:50.107733] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:59.360 [2024-11-26 19:48:50.107741] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:59.360 [2024-11-26 19:48:50.107749] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:59.360 [2024-11-26 19:48:50.107754] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:07:59.360 [2024-11-26 19:48:50.107762] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:07:59.360 [2024-11-26 19:48:50.107768] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev4 00:07:59.360 [2024-11-26 19:48:50.107776] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:07:59.360 19:48:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.360 19:48:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:59.360 19:48:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.360 19:48:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:59.360 [2024-11-26 19:48:50.137975] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:59.360 BaseBdev1 00:07:59.360 19:48:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.360 19:48:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:59.360 19:48:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:07:59.360 19:48:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:59.360 19:48:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:59.360 19:48:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:59.360 19:48:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:59.360 19:48:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:59.360 19:48:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.360 19:48:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:59.360 19:48:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:07:59.360 19:48:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:59.360 19:48:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.360 19:48:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:59.360 [ 00:07:59.360 { 00:07:59.360 "name": "BaseBdev1", 00:07:59.360 "aliases": [ 00:07:59.360 "78918f3f-22f0-44b1-aa79-c7868cca4de0" 00:07:59.360 ], 00:07:59.360 "product_name": "Malloc disk", 00:07:59.360 "block_size": 512, 00:07:59.360 "num_blocks": 65536, 00:07:59.360 "uuid": "78918f3f-22f0-44b1-aa79-c7868cca4de0", 00:07:59.360 "assigned_rate_limits": { 00:07:59.360 "rw_ios_per_sec": 0, 00:07:59.360 "rw_mbytes_per_sec": 0, 00:07:59.360 "r_mbytes_per_sec": 0, 00:07:59.360 "w_mbytes_per_sec": 0 00:07:59.360 }, 00:07:59.360 "claimed": true, 00:07:59.360 "claim_type": "exclusive_write", 00:07:59.360 "zoned": false, 00:07:59.360 "supported_io_types": { 00:07:59.360 "read": true, 00:07:59.360 "write": true, 00:07:59.361 "unmap": true, 00:07:59.361 "flush": true, 00:07:59.361 "reset": true, 00:07:59.361 "nvme_admin": false, 00:07:59.361 "nvme_io": false, 00:07:59.361 "nvme_io_md": false, 00:07:59.361 "write_zeroes": true, 00:07:59.361 "zcopy": true, 00:07:59.361 "get_zone_info": false, 00:07:59.361 "zone_management": false, 00:07:59.361 "zone_append": false, 00:07:59.361 "compare": false, 00:07:59.361 "compare_and_write": false, 00:07:59.361 "abort": true, 00:07:59.361 "seek_hole": false, 00:07:59.361 "seek_data": false, 00:07:59.361 "copy": true, 00:07:59.361 "nvme_iov_md": false 00:07:59.361 }, 00:07:59.361 "memory_domains": [ 00:07:59.361 { 00:07:59.361 "dma_device_id": "system", 00:07:59.361 "dma_device_type": 1 00:07:59.361 }, 00:07:59.361 { 00:07:59.361 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:59.361 "dma_device_type": 2 00:07:59.361 } 00:07:59.361 ], 00:07:59.361 "driver_specific": {} 
00:07:59.361 } 00:07:59.361 ] 00:07:59.361 19:48:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.361 19:48:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:59.361 19:48:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:07:59.361 19:48:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:59.361 19:48:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:59.361 19:48:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:59.361 19:48:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:59.361 19:48:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:07:59.361 19:48:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:59.361 19:48:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:59.361 19:48:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:59.361 19:48:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:59.361 19:48:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:59.361 19:48:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:59.361 19:48:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.361 19:48:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:59.361 19:48:50 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.361 19:48:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:59.361 "name": "Existed_Raid", 00:07:59.361 "uuid": "6045780c-4461-4a0d-ad63-3a1a0092f3e6", 00:07:59.361 "strip_size_kb": 64, 00:07:59.361 "state": "configuring", 00:07:59.361 "raid_level": "raid0", 00:07:59.361 "superblock": true, 00:07:59.361 "num_base_bdevs": 4, 00:07:59.361 "num_base_bdevs_discovered": 1, 00:07:59.361 "num_base_bdevs_operational": 4, 00:07:59.361 "base_bdevs_list": [ 00:07:59.361 { 00:07:59.361 "name": "BaseBdev1", 00:07:59.361 "uuid": "78918f3f-22f0-44b1-aa79-c7868cca4de0", 00:07:59.361 "is_configured": true, 00:07:59.361 "data_offset": 2048, 00:07:59.361 "data_size": 63488 00:07:59.361 }, 00:07:59.361 { 00:07:59.361 "name": "BaseBdev2", 00:07:59.361 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:59.361 "is_configured": false, 00:07:59.361 "data_offset": 0, 00:07:59.361 "data_size": 0 00:07:59.361 }, 00:07:59.361 { 00:07:59.361 "name": "BaseBdev3", 00:07:59.361 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:59.361 "is_configured": false, 00:07:59.361 "data_offset": 0, 00:07:59.361 "data_size": 0 00:07:59.361 }, 00:07:59.361 { 00:07:59.361 "name": "BaseBdev4", 00:07:59.361 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:59.361 "is_configured": false, 00:07:59.361 "data_offset": 0, 00:07:59.361 "data_size": 0 00:07:59.361 } 00:07:59.361 ] 00:07:59.361 }' 00:07:59.361 19:48:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:59.361 19:48:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:59.619 19:48:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:59.619 19:48:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.619 19:48:50 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:07:59.619 [2024-11-26 19:48:50.474091] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:59.619 [2024-11-26 19:48:50.474224] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:07:59.619 19:48:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.619 19:48:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:07:59.619 19:48:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.619 19:48:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:59.619 [2024-11-26 19:48:50.482141] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:59.619 [2024-11-26 19:48:50.483910] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:59.619 [2024-11-26 19:48:50.484019] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:59.619 [2024-11-26 19:48:50.484089] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:07:59.619 [2024-11-26 19:48:50.484114] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:07:59.619 [2024-11-26 19:48:50.484213] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:07:59.619 [2024-11-26 19:48:50.484245] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:07:59.619 19:48:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.619 19:48:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:59.619 19:48:50 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:59.619 19:48:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:07:59.619 19:48:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:59.619 19:48:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:59.619 19:48:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:59.619 19:48:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:59.619 19:48:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:07:59.619 19:48:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:59.619 19:48:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:59.619 19:48:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:59.619 19:48:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:59.619 19:48:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:59.619 19:48:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:59.619 19:48:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.619 19:48:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:59.619 19:48:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.619 19:48:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:59.619 "name": 
"Existed_Raid", 00:07:59.619 "uuid": "bd01ed30-386d-4a2f-864a-cac3339048fb", 00:07:59.619 "strip_size_kb": 64, 00:07:59.619 "state": "configuring", 00:07:59.619 "raid_level": "raid0", 00:07:59.620 "superblock": true, 00:07:59.620 "num_base_bdevs": 4, 00:07:59.620 "num_base_bdevs_discovered": 1, 00:07:59.620 "num_base_bdevs_operational": 4, 00:07:59.620 "base_bdevs_list": [ 00:07:59.620 { 00:07:59.620 "name": "BaseBdev1", 00:07:59.620 "uuid": "78918f3f-22f0-44b1-aa79-c7868cca4de0", 00:07:59.620 "is_configured": true, 00:07:59.620 "data_offset": 2048, 00:07:59.620 "data_size": 63488 00:07:59.620 }, 00:07:59.620 { 00:07:59.620 "name": "BaseBdev2", 00:07:59.620 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:59.620 "is_configured": false, 00:07:59.620 "data_offset": 0, 00:07:59.620 "data_size": 0 00:07:59.620 }, 00:07:59.620 { 00:07:59.620 "name": "BaseBdev3", 00:07:59.620 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:59.620 "is_configured": false, 00:07:59.620 "data_offset": 0, 00:07:59.620 "data_size": 0 00:07:59.620 }, 00:07:59.620 { 00:07:59.620 "name": "BaseBdev4", 00:07:59.620 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:59.620 "is_configured": false, 00:07:59.620 "data_offset": 0, 00:07:59.620 "data_size": 0 00:07:59.620 } 00:07:59.620 ] 00:07:59.620 }' 00:07:59.620 19:48:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:59.620 19:48:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:59.877 19:48:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:59.877 19:48:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.877 19:48:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:00.135 [2024-11-26 19:48:50.835253] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 
00:08:00.135 BaseBdev2 00:08:00.135 19:48:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.135 19:48:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:00.135 19:48:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:00.135 19:48:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:00.135 19:48:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:00.135 19:48:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:00.135 19:48:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:00.135 19:48:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:00.135 19:48:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.135 19:48:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:00.135 19:48:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.135 19:48:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:00.135 19:48:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.135 19:48:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:00.135 [ 00:08:00.135 { 00:08:00.135 "name": "BaseBdev2", 00:08:00.135 "aliases": [ 00:08:00.135 "25dc2ef4-6bb7-4357-91df-c459a7ce12c8" 00:08:00.135 ], 00:08:00.135 "product_name": "Malloc disk", 00:08:00.135 "block_size": 512, 00:08:00.135 "num_blocks": 65536, 00:08:00.135 "uuid": "25dc2ef4-6bb7-4357-91df-c459a7ce12c8", 00:08:00.135 
"assigned_rate_limits": { 00:08:00.135 "rw_ios_per_sec": 0, 00:08:00.135 "rw_mbytes_per_sec": 0, 00:08:00.135 "r_mbytes_per_sec": 0, 00:08:00.135 "w_mbytes_per_sec": 0 00:08:00.135 }, 00:08:00.135 "claimed": true, 00:08:00.135 "claim_type": "exclusive_write", 00:08:00.135 "zoned": false, 00:08:00.135 "supported_io_types": { 00:08:00.135 "read": true, 00:08:00.135 "write": true, 00:08:00.135 "unmap": true, 00:08:00.135 "flush": true, 00:08:00.135 "reset": true, 00:08:00.135 "nvme_admin": false, 00:08:00.135 "nvme_io": false, 00:08:00.135 "nvme_io_md": false, 00:08:00.135 "write_zeroes": true, 00:08:00.135 "zcopy": true, 00:08:00.135 "get_zone_info": false, 00:08:00.135 "zone_management": false, 00:08:00.135 "zone_append": false, 00:08:00.135 "compare": false, 00:08:00.135 "compare_and_write": false, 00:08:00.135 "abort": true, 00:08:00.135 "seek_hole": false, 00:08:00.135 "seek_data": false, 00:08:00.135 "copy": true, 00:08:00.135 "nvme_iov_md": false 00:08:00.135 }, 00:08:00.135 "memory_domains": [ 00:08:00.135 { 00:08:00.135 "dma_device_id": "system", 00:08:00.135 "dma_device_type": 1 00:08:00.135 }, 00:08:00.135 { 00:08:00.135 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:00.135 "dma_device_type": 2 00:08:00.135 } 00:08:00.135 ], 00:08:00.135 "driver_specific": {} 00:08:00.135 } 00:08:00.135 ] 00:08:00.135 19:48:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.135 19:48:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:00.135 19:48:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:00.135 19:48:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:00.135 19:48:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:08:00.135 19:48:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:08:00.135 19:48:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:00.135 19:48:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:00.135 19:48:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:00.135 19:48:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:08:00.136 19:48:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:00.136 19:48:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:00.136 19:48:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:00.136 19:48:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:00.136 19:48:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:00.136 19:48:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:00.136 19:48:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.136 19:48:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:00.136 19:48:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.136 19:48:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:00.136 "name": "Existed_Raid", 00:08:00.136 "uuid": "bd01ed30-386d-4a2f-864a-cac3339048fb", 00:08:00.136 "strip_size_kb": 64, 00:08:00.136 "state": "configuring", 00:08:00.136 "raid_level": "raid0", 00:08:00.136 "superblock": true, 00:08:00.136 "num_base_bdevs": 4, 00:08:00.136 "num_base_bdevs_discovered": 2, 00:08:00.136 "num_base_bdevs_operational": 4, 
00:08:00.136 "base_bdevs_list": [ 00:08:00.136 { 00:08:00.136 "name": "BaseBdev1", 00:08:00.136 "uuid": "78918f3f-22f0-44b1-aa79-c7868cca4de0", 00:08:00.136 "is_configured": true, 00:08:00.136 "data_offset": 2048, 00:08:00.136 "data_size": 63488 00:08:00.136 }, 00:08:00.136 { 00:08:00.136 "name": "BaseBdev2", 00:08:00.136 "uuid": "25dc2ef4-6bb7-4357-91df-c459a7ce12c8", 00:08:00.136 "is_configured": true, 00:08:00.136 "data_offset": 2048, 00:08:00.136 "data_size": 63488 00:08:00.136 }, 00:08:00.136 { 00:08:00.136 "name": "BaseBdev3", 00:08:00.136 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:00.136 "is_configured": false, 00:08:00.136 "data_offset": 0, 00:08:00.136 "data_size": 0 00:08:00.136 }, 00:08:00.136 { 00:08:00.136 "name": "BaseBdev4", 00:08:00.136 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:00.136 "is_configured": false, 00:08:00.136 "data_offset": 0, 00:08:00.136 "data_size": 0 00:08:00.136 } 00:08:00.136 ] 00:08:00.136 }' 00:08:00.136 19:48:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:00.136 19:48:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:00.394 19:48:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:00.394 19:48:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.394 19:48:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:00.394 [2024-11-26 19:48:51.224404] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:00.394 BaseBdev3 00:08:00.394 19:48:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.394 19:48:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:08:00.394 19:48:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # 
local bdev_name=BaseBdev3 00:08:00.394 19:48:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:00.394 19:48:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:00.394 19:48:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:00.394 19:48:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:00.394 19:48:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:00.394 19:48:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.394 19:48:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:00.394 19:48:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.394 19:48:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:00.394 19:48:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.394 19:48:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:00.394 [ 00:08:00.394 { 00:08:00.394 "name": "BaseBdev3", 00:08:00.394 "aliases": [ 00:08:00.394 "ccebe76b-1ee0-4b38-b60c-b8e75064f070" 00:08:00.394 ], 00:08:00.394 "product_name": "Malloc disk", 00:08:00.394 "block_size": 512, 00:08:00.394 "num_blocks": 65536, 00:08:00.394 "uuid": "ccebe76b-1ee0-4b38-b60c-b8e75064f070", 00:08:00.394 "assigned_rate_limits": { 00:08:00.394 "rw_ios_per_sec": 0, 00:08:00.394 "rw_mbytes_per_sec": 0, 00:08:00.394 "r_mbytes_per_sec": 0, 00:08:00.394 "w_mbytes_per_sec": 0 00:08:00.394 }, 00:08:00.394 "claimed": true, 00:08:00.394 "claim_type": "exclusive_write", 00:08:00.394 "zoned": false, 00:08:00.394 "supported_io_types": { 00:08:00.394 "read": true, 00:08:00.394 
"write": true, 00:08:00.394 "unmap": true, 00:08:00.394 "flush": true, 00:08:00.394 "reset": true, 00:08:00.394 "nvme_admin": false, 00:08:00.394 "nvme_io": false, 00:08:00.394 "nvme_io_md": false, 00:08:00.394 "write_zeroes": true, 00:08:00.394 "zcopy": true, 00:08:00.394 "get_zone_info": false, 00:08:00.394 "zone_management": false, 00:08:00.394 "zone_append": false, 00:08:00.394 "compare": false, 00:08:00.394 "compare_and_write": false, 00:08:00.394 "abort": true, 00:08:00.394 "seek_hole": false, 00:08:00.394 "seek_data": false, 00:08:00.394 "copy": true, 00:08:00.394 "nvme_iov_md": false 00:08:00.394 }, 00:08:00.394 "memory_domains": [ 00:08:00.394 { 00:08:00.394 "dma_device_id": "system", 00:08:00.394 "dma_device_type": 1 00:08:00.394 }, 00:08:00.394 { 00:08:00.394 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:00.394 "dma_device_type": 2 00:08:00.394 } 00:08:00.394 ], 00:08:00.394 "driver_specific": {} 00:08:00.394 } 00:08:00.394 ] 00:08:00.394 19:48:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.394 19:48:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:00.394 19:48:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:00.394 19:48:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:00.394 19:48:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:08:00.394 19:48:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:00.394 19:48:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:00.394 19:48:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:00.394 19:48:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:08:00.394 19:48:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:08:00.394 19:48:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:00.394 19:48:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:00.394 19:48:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:00.394 19:48:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:00.394 19:48:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:00.394 19:48:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:00.394 19:48:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.394 19:48:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:00.394 19:48:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.394 19:48:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:00.394 "name": "Existed_Raid", 00:08:00.394 "uuid": "bd01ed30-386d-4a2f-864a-cac3339048fb", 00:08:00.394 "strip_size_kb": 64, 00:08:00.394 "state": "configuring", 00:08:00.394 "raid_level": "raid0", 00:08:00.394 "superblock": true, 00:08:00.394 "num_base_bdevs": 4, 00:08:00.394 "num_base_bdevs_discovered": 3, 00:08:00.394 "num_base_bdevs_operational": 4, 00:08:00.394 "base_bdevs_list": [ 00:08:00.394 { 00:08:00.394 "name": "BaseBdev1", 00:08:00.394 "uuid": "78918f3f-22f0-44b1-aa79-c7868cca4de0", 00:08:00.394 "is_configured": true, 00:08:00.394 "data_offset": 2048, 00:08:00.394 "data_size": 63488 00:08:00.394 }, 00:08:00.394 { 00:08:00.394 "name": "BaseBdev2", 00:08:00.394 "uuid": 
"25dc2ef4-6bb7-4357-91df-c459a7ce12c8", 00:08:00.394 "is_configured": true, 00:08:00.395 "data_offset": 2048, 00:08:00.395 "data_size": 63488 00:08:00.395 }, 00:08:00.395 { 00:08:00.395 "name": "BaseBdev3", 00:08:00.395 "uuid": "ccebe76b-1ee0-4b38-b60c-b8e75064f070", 00:08:00.395 "is_configured": true, 00:08:00.395 "data_offset": 2048, 00:08:00.395 "data_size": 63488 00:08:00.395 }, 00:08:00.395 { 00:08:00.395 "name": "BaseBdev4", 00:08:00.395 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:00.395 "is_configured": false, 00:08:00.395 "data_offset": 0, 00:08:00.395 "data_size": 0 00:08:00.395 } 00:08:00.395 ] 00:08:00.395 }' 00:08:00.395 19:48:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:00.395 19:48:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:00.653 19:48:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:08:00.653 19:48:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.653 19:48:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:00.653 [2024-11-26 19:48:51.573100] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:08:00.653 [2024-11-26 19:48:51.573322] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:00.653 [2024-11-26 19:48:51.573333] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:08:00.653 BaseBdev4 00:08:00.653 [2024-11-26 19:48:51.573597] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:08:00.653 [2024-11-26 19:48:51.573716] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:00.653 [2024-11-26 19:48:51.573725] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:08:00.653 [2024-11-26 19:48:51.573837] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:00.653 19:48:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.653 19:48:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:08:00.653 19:48:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:08:00.653 19:48:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:00.653 19:48:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:00.653 19:48:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:00.653 19:48:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:00.653 19:48:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:00.653 19:48:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.653 19:48:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:00.653 19:48:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.653 19:48:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:08:00.653 19:48:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.653 19:48:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:00.911 [ 00:08:00.911 { 00:08:00.911 "name": "BaseBdev4", 00:08:00.911 "aliases": [ 00:08:00.911 "d949e2c5-b966-4e84-83b6-340952b77989" 00:08:00.911 ], 00:08:00.911 "product_name": "Malloc disk", 00:08:00.911 "block_size": 512, 00:08:00.911 
"num_blocks": 65536, 00:08:00.911 "uuid": "d949e2c5-b966-4e84-83b6-340952b77989", 00:08:00.911 "assigned_rate_limits": { 00:08:00.911 "rw_ios_per_sec": 0, 00:08:00.911 "rw_mbytes_per_sec": 0, 00:08:00.911 "r_mbytes_per_sec": 0, 00:08:00.911 "w_mbytes_per_sec": 0 00:08:00.911 }, 00:08:00.911 "claimed": true, 00:08:00.911 "claim_type": "exclusive_write", 00:08:00.911 "zoned": false, 00:08:00.911 "supported_io_types": { 00:08:00.911 "read": true, 00:08:00.911 "write": true, 00:08:00.911 "unmap": true, 00:08:00.911 "flush": true, 00:08:00.911 "reset": true, 00:08:00.911 "nvme_admin": false, 00:08:00.911 "nvme_io": false, 00:08:00.911 "nvme_io_md": false, 00:08:00.911 "write_zeroes": true, 00:08:00.911 "zcopy": true, 00:08:00.911 "get_zone_info": false, 00:08:00.911 "zone_management": false, 00:08:00.911 "zone_append": false, 00:08:00.911 "compare": false, 00:08:00.911 "compare_and_write": false, 00:08:00.911 "abort": true, 00:08:00.911 "seek_hole": false, 00:08:00.911 "seek_data": false, 00:08:00.911 "copy": true, 00:08:00.911 "nvme_iov_md": false 00:08:00.911 }, 00:08:00.911 "memory_domains": [ 00:08:00.911 { 00:08:00.911 "dma_device_id": "system", 00:08:00.911 "dma_device_type": 1 00:08:00.911 }, 00:08:00.911 { 00:08:00.911 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:00.911 "dma_device_type": 2 00:08:00.911 } 00:08:00.911 ], 00:08:00.911 "driver_specific": {} 00:08:00.911 } 00:08:00.911 ] 00:08:00.911 19:48:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.911 19:48:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:00.911 19:48:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:00.911 19:48:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:00.911 19:48:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 
00:08:00.911 19:48:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:00.911 19:48:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:00.911 19:48:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:00.911 19:48:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:00.911 19:48:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:08:00.911 19:48:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:00.911 19:48:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:00.911 19:48:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:00.911 19:48:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:00.911 19:48:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:00.911 19:48:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.911 19:48:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:00.911 19:48:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:00.911 19:48:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.911 19:48:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:00.911 "name": "Existed_Raid", 00:08:00.911 "uuid": "bd01ed30-386d-4a2f-864a-cac3339048fb", 00:08:00.911 "strip_size_kb": 64, 00:08:00.911 "state": "online", 00:08:00.911 "raid_level": "raid0", 00:08:00.911 "superblock": true, 00:08:00.911 "num_base_bdevs": 4, 
00:08:00.911 "num_base_bdevs_discovered": 4, 00:08:00.911 "num_base_bdevs_operational": 4, 00:08:00.911 "base_bdevs_list": [ 00:08:00.911 { 00:08:00.911 "name": "BaseBdev1", 00:08:00.911 "uuid": "78918f3f-22f0-44b1-aa79-c7868cca4de0", 00:08:00.911 "is_configured": true, 00:08:00.911 "data_offset": 2048, 00:08:00.911 "data_size": 63488 00:08:00.911 }, 00:08:00.911 { 00:08:00.911 "name": "BaseBdev2", 00:08:00.911 "uuid": "25dc2ef4-6bb7-4357-91df-c459a7ce12c8", 00:08:00.911 "is_configured": true, 00:08:00.911 "data_offset": 2048, 00:08:00.911 "data_size": 63488 00:08:00.911 }, 00:08:00.911 { 00:08:00.911 "name": "BaseBdev3", 00:08:00.911 "uuid": "ccebe76b-1ee0-4b38-b60c-b8e75064f070", 00:08:00.911 "is_configured": true, 00:08:00.911 "data_offset": 2048, 00:08:00.911 "data_size": 63488 00:08:00.911 }, 00:08:00.911 { 00:08:00.911 "name": "BaseBdev4", 00:08:00.911 "uuid": "d949e2c5-b966-4e84-83b6-340952b77989", 00:08:00.911 "is_configured": true, 00:08:00.911 "data_offset": 2048, 00:08:00.911 "data_size": 63488 00:08:00.911 } 00:08:00.911 ] 00:08:00.912 }' 00:08:00.912 19:48:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:00.912 19:48:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:01.170 19:48:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:01.170 19:48:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:01.170 19:48:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:01.170 19:48:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:01.170 19:48:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:01.170 19:48:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:01.170 
19:48:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:01.170 19:48:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.170 19:48:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:01.170 19:48:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:01.170 [2024-11-26 19:48:51.933557] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:01.170 19:48:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.170 19:48:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:01.170 "name": "Existed_Raid", 00:08:01.170 "aliases": [ 00:08:01.170 "bd01ed30-386d-4a2f-864a-cac3339048fb" 00:08:01.170 ], 00:08:01.170 "product_name": "Raid Volume", 00:08:01.170 "block_size": 512, 00:08:01.170 "num_blocks": 253952, 00:08:01.170 "uuid": "bd01ed30-386d-4a2f-864a-cac3339048fb", 00:08:01.170 "assigned_rate_limits": { 00:08:01.170 "rw_ios_per_sec": 0, 00:08:01.170 "rw_mbytes_per_sec": 0, 00:08:01.170 "r_mbytes_per_sec": 0, 00:08:01.170 "w_mbytes_per_sec": 0 00:08:01.170 }, 00:08:01.170 "claimed": false, 00:08:01.170 "zoned": false, 00:08:01.170 "supported_io_types": { 00:08:01.170 "read": true, 00:08:01.170 "write": true, 00:08:01.170 "unmap": true, 00:08:01.171 "flush": true, 00:08:01.171 "reset": true, 00:08:01.171 "nvme_admin": false, 00:08:01.171 "nvme_io": false, 00:08:01.171 "nvme_io_md": false, 00:08:01.171 "write_zeroes": true, 00:08:01.171 "zcopy": false, 00:08:01.171 "get_zone_info": false, 00:08:01.171 "zone_management": false, 00:08:01.171 "zone_append": false, 00:08:01.171 "compare": false, 00:08:01.171 "compare_and_write": false, 00:08:01.171 "abort": false, 00:08:01.171 "seek_hole": false, 00:08:01.171 "seek_data": false, 00:08:01.171 "copy": false, 00:08:01.171 
"nvme_iov_md": false 00:08:01.171 }, 00:08:01.171 "memory_domains": [ 00:08:01.171 { 00:08:01.171 "dma_device_id": "system", 00:08:01.171 "dma_device_type": 1 00:08:01.171 }, 00:08:01.171 { 00:08:01.171 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:01.171 "dma_device_type": 2 00:08:01.171 }, 00:08:01.171 { 00:08:01.171 "dma_device_id": "system", 00:08:01.171 "dma_device_type": 1 00:08:01.171 }, 00:08:01.171 { 00:08:01.171 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:01.171 "dma_device_type": 2 00:08:01.171 }, 00:08:01.171 { 00:08:01.171 "dma_device_id": "system", 00:08:01.171 "dma_device_type": 1 00:08:01.171 }, 00:08:01.171 { 00:08:01.171 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:01.171 "dma_device_type": 2 00:08:01.171 }, 00:08:01.171 { 00:08:01.171 "dma_device_id": "system", 00:08:01.171 "dma_device_type": 1 00:08:01.171 }, 00:08:01.171 { 00:08:01.171 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:01.171 "dma_device_type": 2 00:08:01.171 } 00:08:01.171 ], 00:08:01.171 "driver_specific": { 00:08:01.171 "raid": { 00:08:01.171 "uuid": "bd01ed30-386d-4a2f-864a-cac3339048fb", 00:08:01.171 "strip_size_kb": 64, 00:08:01.171 "state": "online", 00:08:01.171 "raid_level": "raid0", 00:08:01.171 "superblock": true, 00:08:01.171 "num_base_bdevs": 4, 00:08:01.171 "num_base_bdevs_discovered": 4, 00:08:01.171 "num_base_bdevs_operational": 4, 00:08:01.171 "base_bdevs_list": [ 00:08:01.171 { 00:08:01.171 "name": "BaseBdev1", 00:08:01.171 "uuid": "78918f3f-22f0-44b1-aa79-c7868cca4de0", 00:08:01.171 "is_configured": true, 00:08:01.171 "data_offset": 2048, 00:08:01.171 "data_size": 63488 00:08:01.171 }, 00:08:01.171 { 00:08:01.171 "name": "BaseBdev2", 00:08:01.171 "uuid": "25dc2ef4-6bb7-4357-91df-c459a7ce12c8", 00:08:01.171 "is_configured": true, 00:08:01.171 "data_offset": 2048, 00:08:01.171 "data_size": 63488 00:08:01.171 }, 00:08:01.171 { 00:08:01.171 "name": "BaseBdev3", 00:08:01.171 "uuid": "ccebe76b-1ee0-4b38-b60c-b8e75064f070", 00:08:01.171 "is_configured": true, 
00:08:01.171 "data_offset": 2048, 00:08:01.171 "data_size": 63488 00:08:01.171 }, 00:08:01.171 { 00:08:01.171 "name": "BaseBdev4", 00:08:01.171 "uuid": "d949e2c5-b966-4e84-83b6-340952b77989", 00:08:01.171 "is_configured": true, 00:08:01.171 "data_offset": 2048, 00:08:01.171 "data_size": 63488 00:08:01.171 } 00:08:01.171 ] 00:08:01.171 } 00:08:01.171 } 00:08:01.171 }' 00:08:01.171 19:48:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:01.171 19:48:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:01.171 BaseBdev2 00:08:01.171 BaseBdev3 00:08:01.171 BaseBdev4' 00:08:01.171 19:48:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:01.171 19:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:01.171 19:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:01.171 19:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:01.171 19:48:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.171 19:48:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:01.171 19:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:01.171 19:48:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.171 19:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:01.171 19:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:01.171 19:48:52 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:01.171 19:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:01.171 19:48:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.171 19:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:01.171 19:48:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:01.171 19:48:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.171 19:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:01.171 19:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:01.171 19:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:01.171 19:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:01.171 19:48:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.171 19:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:01.171 19:48:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:01.429 19:48:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.429 19:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:01.429 19:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:01.429 19:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:08:01.429 19:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:01.429 19:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:08:01.429 19:48:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.429 19:48:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:01.429 19:48:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.429 19:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:01.429 19:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:01.429 19:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:01.429 19:48:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.429 19:48:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:01.429 [2024-11-26 19:48:52.169318] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:01.429 [2024-11-26 19:48:52.169362] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:01.429 [2024-11-26 19:48:52.169410] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:01.429 19:48:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.429 19:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:01.429 19:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:08:01.429 19:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:08:01.429 19:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:08:01.429 19:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:01.429 19:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:08:01.429 19:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:01.429 19:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:01.429 19:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:01.429 19:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:01.429 19:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:01.429 19:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:01.429 19:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:01.429 19:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:01.429 19:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:01.429 19:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:01.429 19:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:01.429 19:48:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.430 19:48:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:01.430 19:48:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:08:01.430 19:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:01.430 "name": "Existed_Raid", 00:08:01.430 "uuid": "bd01ed30-386d-4a2f-864a-cac3339048fb", 00:08:01.430 "strip_size_kb": 64, 00:08:01.430 "state": "offline", 00:08:01.430 "raid_level": "raid0", 00:08:01.430 "superblock": true, 00:08:01.430 "num_base_bdevs": 4, 00:08:01.430 "num_base_bdevs_discovered": 3, 00:08:01.430 "num_base_bdevs_operational": 3, 00:08:01.430 "base_bdevs_list": [ 00:08:01.430 { 00:08:01.430 "name": null, 00:08:01.430 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:01.430 "is_configured": false, 00:08:01.430 "data_offset": 0, 00:08:01.430 "data_size": 63488 00:08:01.430 }, 00:08:01.430 { 00:08:01.430 "name": "BaseBdev2", 00:08:01.430 "uuid": "25dc2ef4-6bb7-4357-91df-c459a7ce12c8", 00:08:01.430 "is_configured": true, 00:08:01.430 "data_offset": 2048, 00:08:01.430 "data_size": 63488 00:08:01.430 }, 00:08:01.430 { 00:08:01.430 "name": "BaseBdev3", 00:08:01.430 "uuid": "ccebe76b-1ee0-4b38-b60c-b8e75064f070", 00:08:01.430 "is_configured": true, 00:08:01.430 "data_offset": 2048, 00:08:01.430 "data_size": 63488 00:08:01.430 }, 00:08:01.430 { 00:08:01.430 "name": "BaseBdev4", 00:08:01.430 "uuid": "d949e2c5-b966-4e84-83b6-340952b77989", 00:08:01.430 "is_configured": true, 00:08:01.430 "data_offset": 2048, 00:08:01.430 "data_size": 63488 00:08:01.430 } 00:08:01.430 ] 00:08:01.430 }' 00:08:01.430 19:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:01.430 19:48:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:01.687 19:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:01.687 19:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:01.687 19:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:01.687 
19:48:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.687 19:48:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:01.687 19:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:01.687 19:48:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.687 19:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:01.687 19:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:01.687 19:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:01.687 19:48:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.687 19:48:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:01.687 [2024-11-26 19:48:52.587407] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:01.945 19:48:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.945 19:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:01.945 19:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:01.945 19:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:01.945 19:48:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.945 19:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:01.945 19:48:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:01.945 19:48:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:08:01.945 19:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:01.945 19:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:01.945 19:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:08:01.945 19:48:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.945 19:48:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:01.945 [2024-11-26 19:48:52.676935] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:01.945 19:48:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.945 19:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:01.945 19:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:01.945 19:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:01.945 19:48:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.945 19:48:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:01.945 19:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:01.945 19:48:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.945 19:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:01.945 19:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:01.945 19:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:08:01.945 19:48:52 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.945 19:48:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:01.945 [2024-11-26 19:48:52.766493] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:08:01.945 [2024-11-26 19:48:52.766536] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:01.945 19:48:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.945 19:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:01.945 19:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:01.945 19:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:01.945 19:48:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.945 19:48:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:01.946 19:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:01.946 19:48:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.946 19:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:01.946 19:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:01.946 19:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:08:01.946 19:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:08:01.946 19:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:01.946 19:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:08:01.946 19:48:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.946 19:48:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:01.946 BaseBdev2 00:08:01.946 19:48:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.946 19:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:08:01.946 19:48:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:01.946 19:48:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:01.946 19:48:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:01.946 19:48:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:01.946 19:48:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:01.946 19:48:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:01.946 19:48:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.946 19:48:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:02.203 19:48:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.203 19:48:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:02.203 19:48:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.203 19:48:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:02.203 [ 00:08:02.203 { 00:08:02.203 "name": "BaseBdev2", 00:08:02.203 "aliases": [ 00:08:02.203 
"1b934c14-919c-474e-9425-134c2d9d0180" 00:08:02.203 ], 00:08:02.203 "product_name": "Malloc disk", 00:08:02.203 "block_size": 512, 00:08:02.203 "num_blocks": 65536, 00:08:02.203 "uuid": "1b934c14-919c-474e-9425-134c2d9d0180", 00:08:02.203 "assigned_rate_limits": { 00:08:02.203 "rw_ios_per_sec": 0, 00:08:02.203 "rw_mbytes_per_sec": 0, 00:08:02.203 "r_mbytes_per_sec": 0, 00:08:02.204 "w_mbytes_per_sec": 0 00:08:02.204 }, 00:08:02.204 "claimed": false, 00:08:02.204 "zoned": false, 00:08:02.204 "supported_io_types": { 00:08:02.204 "read": true, 00:08:02.204 "write": true, 00:08:02.204 "unmap": true, 00:08:02.204 "flush": true, 00:08:02.204 "reset": true, 00:08:02.204 "nvme_admin": false, 00:08:02.204 "nvme_io": false, 00:08:02.204 "nvme_io_md": false, 00:08:02.204 "write_zeroes": true, 00:08:02.204 "zcopy": true, 00:08:02.204 "get_zone_info": false, 00:08:02.204 "zone_management": false, 00:08:02.204 "zone_append": false, 00:08:02.204 "compare": false, 00:08:02.204 "compare_and_write": false, 00:08:02.204 "abort": true, 00:08:02.204 "seek_hole": false, 00:08:02.204 "seek_data": false, 00:08:02.204 "copy": true, 00:08:02.204 "nvme_iov_md": false 00:08:02.204 }, 00:08:02.204 "memory_domains": [ 00:08:02.204 { 00:08:02.204 "dma_device_id": "system", 00:08:02.204 "dma_device_type": 1 00:08:02.204 }, 00:08:02.204 { 00:08:02.204 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:02.204 "dma_device_type": 2 00:08:02.204 } 00:08:02.204 ], 00:08:02.204 "driver_specific": {} 00:08:02.204 } 00:08:02.204 ] 00:08:02.204 19:48:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.204 19:48:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:02.204 19:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:02.204 19:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:02.204 19:48:52 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:02.204 19:48:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.204 19:48:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:02.204 BaseBdev3 00:08:02.204 19:48:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.204 19:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:08:02.204 19:48:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:02.204 19:48:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:02.204 19:48:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:02.204 19:48:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:02.204 19:48:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:02.204 19:48:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:02.204 19:48:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.204 19:48:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:02.204 19:48:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.204 19:48:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:02.204 19:48:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.204 19:48:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:02.204 [ 00:08:02.204 { 
00:08:02.204 "name": "BaseBdev3", 00:08:02.204 "aliases": [ 00:08:02.204 "cbe7eea2-2abb-4e7e-b729-3ae3aa5f51e2" 00:08:02.204 ], 00:08:02.204 "product_name": "Malloc disk", 00:08:02.204 "block_size": 512, 00:08:02.204 "num_blocks": 65536, 00:08:02.204 "uuid": "cbe7eea2-2abb-4e7e-b729-3ae3aa5f51e2", 00:08:02.204 "assigned_rate_limits": { 00:08:02.204 "rw_ios_per_sec": 0, 00:08:02.204 "rw_mbytes_per_sec": 0, 00:08:02.204 "r_mbytes_per_sec": 0, 00:08:02.204 "w_mbytes_per_sec": 0 00:08:02.204 }, 00:08:02.204 "claimed": false, 00:08:02.204 "zoned": false, 00:08:02.204 "supported_io_types": { 00:08:02.204 "read": true, 00:08:02.204 "write": true, 00:08:02.204 "unmap": true, 00:08:02.204 "flush": true, 00:08:02.204 "reset": true, 00:08:02.204 "nvme_admin": false, 00:08:02.204 "nvme_io": false, 00:08:02.204 "nvme_io_md": false, 00:08:02.204 "write_zeroes": true, 00:08:02.204 "zcopy": true, 00:08:02.204 "get_zone_info": false, 00:08:02.204 "zone_management": false, 00:08:02.204 "zone_append": false, 00:08:02.204 "compare": false, 00:08:02.204 "compare_and_write": false, 00:08:02.204 "abort": true, 00:08:02.204 "seek_hole": false, 00:08:02.204 "seek_data": false, 00:08:02.204 "copy": true, 00:08:02.204 "nvme_iov_md": false 00:08:02.204 }, 00:08:02.204 "memory_domains": [ 00:08:02.204 { 00:08:02.204 "dma_device_id": "system", 00:08:02.204 "dma_device_type": 1 00:08:02.204 }, 00:08:02.204 { 00:08:02.204 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:02.204 "dma_device_type": 2 00:08:02.204 } 00:08:02.204 ], 00:08:02.204 "driver_specific": {} 00:08:02.204 } 00:08:02.204 ] 00:08:02.204 19:48:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.204 19:48:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:02.204 19:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:02.204 19:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:08:02.204 19:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:08:02.204 19:48:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.204 19:48:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:02.204 BaseBdev4 00:08:02.204 19:48:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.204 19:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:08:02.204 19:48:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:08:02.204 19:48:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:02.204 19:48:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:02.204 19:48:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:02.204 19:48:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:02.204 19:48:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:02.204 19:48:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.204 19:48:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:02.204 19:48:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.204 19:48:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:08:02.204 19:48:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.204 19:48:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:08:02.204 [ 00:08:02.204 { 00:08:02.204 "name": "BaseBdev4", 00:08:02.204 "aliases": [ 00:08:02.204 "dede84da-0436-4b32-b330-2c2c0601b9d4" 00:08:02.204 ], 00:08:02.204 "product_name": "Malloc disk", 00:08:02.204 "block_size": 512, 00:08:02.204 "num_blocks": 65536, 00:08:02.204 "uuid": "dede84da-0436-4b32-b330-2c2c0601b9d4", 00:08:02.204 "assigned_rate_limits": { 00:08:02.204 "rw_ios_per_sec": 0, 00:08:02.204 "rw_mbytes_per_sec": 0, 00:08:02.204 "r_mbytes_per_sec": 0, 00:08:02.204 "w_mbytes_per_sec": 0 00:08:02.204 }, 00:08:02.204 "claimed": false, 00:08:02.204 "zoned": false, 00:08:02.204 "supported_io_types": { 00:08:02.204 "read": true, 00:08:02.204 "write": true, 00:08:02.204 "unmap": true, 00:08:02.204 "flush": true, 00:08:02.204 "reset": true, 00:08:02.204 "nvme_admin": false, 00:08:02.204 "nvme_io": false, 00:08:02.204 "nvme_io_md": false, 00:08:02.204 "write_zeroes": true, 00:08:02.204 "zcopy": true, 00:08:02.204 "get_zone_info": false, 00:08:02.204 "zone_management": false, 00:08:02.204 "zone_append": false, 00:08:02.204 "compare": false, 00:08:02.204 "compare_and_write": false, 00:08:02.204 "abort": true, 00:08:02.204 "seek_hole": false, 00:08:02.204 "seek_data": false, 00:08:02.204 "copy": true, 00:08:02.204 "nvme_iov_md": false 00:08:02.204 }, 00:08:02.204 "memory_domains": [ 00:08:02.204 { 00:08:02.204 "dma_device_id": "system", 00:08:02.204 "dma_device_type": 1 00:08:02.204 }, 00:08:02.204 { 00:08:02.204 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:02.204 "dma_device_type": 2 00:08:02.204 } 00:08:02.204 ], 00:08:02.204 "driver_specific": {} 00:08:02.204 } 00:08:02.204 ] 00:08:02.204 19:48:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.204 19:48:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:02.204 19:48:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:02.204 19:48:53 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:02.204 19:48:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:08:02.204 19:48:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.204 19:48:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:02.204 [2024-11-26 19:48:53.017151] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:02.204 [2024-11-26 19:48:53.017325] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:02.204 [2024-11-26 19:48:53.017411] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:02.204 [2024-11-26 19:48:53.019144] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:02.205 [2024-11-26 19:48:53.019268] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:08:02.205 19:48:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.205 19:48:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:08:02.205 19:48:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:02.205 19:48:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:02.205 19:48:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:02.205 19:48:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:02.205 19:48:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:08:02.205 19:48:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:02.205 19:48:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:02.205 19:48:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:02.205 19:48:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:02.205 19:48:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:02.205 19:48:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:02.205 19:48:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.205 19:48:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:02.205 19:48:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.205 19:48:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:02.205 "name": "Existed_Raid", 00:08:02.205 "uuid": "0e1efeb2-470f-40e0-97fc-c4b2eea9beb9", 00:08:02.205 "strip_size_kb": 64, 00:08:02.205 "state": "configuring", 00:08:02.205 "raid_level": "raid0", 00:08:02.205 "superblock": true, 00:08:02.205 "num_base_bdevs": 4, 00:08:02.205 "num_base_bdevs_discovered": 3, 00:08:02.205 "num_base_bdevs_operational": 4, 00:08:02.205 "base_bdevs_list": [ 00:08:02.205 { 00:08:02.205 "name": "BaseBdev1", 00:08:02.205 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:02.205 "is_configured": false, 00:08:02.205 "data_offset": 0, 00:08:02.205 "data_size": 0 00:08:02.205 }, 00:08:02.205 { 00:08:02.205 "name": "BaseBdev2", 00:08:02.205 "uuid": "1b934c14-919c-474e-9425-134c2d9d0180", 00:08:02.205 "is_configured": true, 00:08:02.205 "data_offset": 2048, 00:08:02.205 "data_size": 63488 
00:08:02.205 }, 00:08:02.205 { 00:08:02.205 "name": "BaseBdev3", 00:08:02.205 "uuid": "cbe7eea2-2abb-4e7e-b729-3ae3aa5f51e2", 00:08:02.205 "is_configured": true, 00:08:02.205 "data_offset": 2048, 00:08:02.205 "data_size": 63488 00:08:02.205 }, 00:08:02.205 { 00:08:02.205 "name": "BaseBdev4", 00:08:02.205 "uuid": "dede84da-0436-4b32-b330-2c2c0601b9d4", 00:08:02.205 "is_configured": true, 00:08:02.205 "data_offset": 2048, 00:08:02.205 "data_size": 63488 00:08:02.205 } 00:08:02.205 ] 00:08:02.205 }' 00:08:02.205 19:48:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:02.205 19:48:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:02.464 19:48:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:08:02.464 19:48:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.464 19:48:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:02.464 [2024-11-26 19:48:53.341230] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:02.464 19:48:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.464 19:48:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:08:02.464 19:48:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:02.464 19:48:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:02.464 19:48:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:02.464 19:48:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:02.464 19:48:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:08:02.464 19:48:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:02.464 19:48:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:02.464 19:48:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:02.464 19:48:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:02.464 19:48:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:02.464 19:48:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.464 19:48:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:02.464 19:48:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:02.464 19:48:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.464 19:48:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:02.464 "name": "Existed_Raid", 00:08:02.464 "uuid": "0e1efeb2-470f-40e0-97fc-c4b2eea9beb9", 00:08:02.464 "strip_size_kb": 64, 00:08:02.464 "state": "configuring", 00:08:02.464 "raid_level": "raid0", 00:08:02.464 "superblock": true, 00:08:02.464 "num_base_bdevs": 4, 00:08:02.464 "num_base_bdevs_discovered": 2, 00:08:02.464 "num_base_bdevs_operational": 4, 00:08:02.464 "base_bdevs_list": [ 00:08:02.464 { 00:08:02.464 "name": "BaseBdev1", 00:08:02.464 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:02.464 "is_configured": false, 00:08:02.464 "data_offset": 0, 00:08:02.464 "data_size": 0 00:08:02.464 }, 00:08:02.464 { 00:08:02.464 "name": null, 00:08:02.464 "uuid": "1b934c14-919c-474e-9425-134c2d9d0180", 00:08:02.464 "is_configured": false, 00:08:02.464 "data_offset": 0, 00:08:02.464 "data_size": 63488 
00:08:02.464 }, 00:08:02.464 { 00:08:02.464 "name": "BaseBdev3", 00:08:02.464 "uuid": "cbe7eea2-2abb-4e7e-b729-3ae3aa5f51e2", 00:08:02.464 "is_configured": true, 00:08:02.464 "data_offset": 2048, 00:08:02.464 "data_size": 63488 00:08:02.464 }, 00:08:02.464 { 00:08:02.464 "name": "BaseBdev4", 00:08:02.464 "uuid": "dede84da-0436-4b32-b330-2c2c0601b9d4", 00:08:02.464 "is_configured": true, 00:08:02.464 "data_offset": 2048, 00:08:02.464 "data_size": 63488 00:08:02.464 } 00:08:02.464 ] 00:08:02.464 }' 00:08:02.464 19:48:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:02.464 19:48:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:03.030 19:48:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:03.030 19:48:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:03.030 19:48:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.030 19:48:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:03.030 19:48:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.030 19:48:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:08:03.030 19:48:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:03.030 19:48:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.030 19:48:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:03.030 [2024-11-26 19:48:53.737513] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:03.030 BaseBdev1 00:08:03.030 19:48:53 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.030 19:48:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:08:03.030 19:48:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:03.030 19:48:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:03.030 19:48:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:03.030 19:48:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:03.030 19:48:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:03.030 19:48:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:03.030 19:48:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.030 19:48:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:03.030 19:48:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.030 19:48:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:03.030 19:48:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.030 19:48:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:03.030 [ 00:08:03.030 { 00:08:03.030 "name": "BaseBdev1", 00:08:03.030 "aliases": [ 00:08:03.030 "3d830559-3e9d-4fce-aae1-984fb180fd14" 00:08:03.030 ], 00:08:03.030 "product_name": "Malloc disk", 00:08:03.030 "block_size": 512, 00:08:03.030 "num_blocks": 65536, 00:08:03.031 "uuid": "3d830559-3e9d-4fce-aae1-984fb180fd14", 00:08:03.031 "assigned_rate_limits": { 00:08:03.031 "rw_ios_per_sec": 0, 00:08:03.031 "rw_mbytes_per_sec": 0, 
00:08:03.031 "r_mbytes_per_sec": 0, 00:08:03.031 "w_mbytes_per_sec": 0 00:08:03.031 }, 00:08:03.031 "claimed": true, 00:08:03.031 "claim_type": "exclusive_write", 00:08:03.031 "zoned": false, 00:08:03.031 "supported_io_types": { 00:08:03.031 "read": true, 00:08:03.031 "write": true, 00:08:03.031 "unmap": true, 00:08:03.031 "flush": true, 00:08:03.031 "reset": true, 00:08:03.031 "nvme_admin": false, 00:08:03.031 "nvme_io": false, 00:08:03.031 "nvme_io_md": false, 00:08:03.031 "write_zeroes": true, 00:08:03.031 "zcopy": true, 00:08:03.031 "get_zone_info": false, 00:08:03.031 "zone_management": false, 00:08:03.031 "zone_append": false, 00:08:03.031 "compare": false, 00:08:03.031 "compare_and_write": false, 00:08:03.031 "abort": true, 00:08:03.031 "seek_hole": false, 00:08:03.031 "seek_data": false, 00:08:03.031 "copy": true, 00:08:03.031 "nvme_iov_md": false 00:08:03.031 }, 00:08:03.031 "memory_domains": [ 00:08:03.031 { 00:08:03.031 "dma_device_id": "system", 00:08:03.031 "dma_device_type": 1 00:08:03.031 }, 00:08:03.031 { 00:08:03.031 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:03.031 "dma_device_type": 2 00:08:03.031 } 00:08:03.031 ], 00:08:03.031 "driver_specific": {} 00:08:03.031 } 00:08:03.031 ] 00:08:03.031 19:48:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.031 19:48:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:03.031 19:48:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:08:03.031 19:48:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:03.031 19:48:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:03.031 19:48:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:03.031 19:48:53 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:03.031 19:48:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:08:03.031 19:48:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:03.031 19:48:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:03.031 19:48:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:03.031 19:48:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:03.031 19:48:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:03.031 19:48:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:03.031 19:48:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.031 19:48:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:03.031 19:48:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.031 19:48:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:03.031 "name": "Existed_Raid", 00:08:03.031 "uuid": "0e1efeb2-470f-40e0-97fc-c4b2eea9beb9", 00:08:03.031 "strip_size_kb": 64, 00:08:03.031 "state": "configuring", 00:08:03.031 "raid_level": "raid0", 00:08:03.031 "superblock": true, 00:08:03.031 "num_base_bdevs": 4, 00:08:03.031 "num_base_bdevs_discovered": 3, 00:08:03.031 "num_base_bdevs_operational": 4, 00:08:03.031 "base_bdevs_list": [ 00:08:03.031 { 00:08:03.031 "name": "BaseBdev1", 00:08:03.031 "uuid": "3d830559-3e9d-4fce-aae1-984fb180fd14", 00:08:03.031 "is_configured": true, 00:08:03.031 "data_offset": 2048, 00:08:03.031 "data_size": 63488 00:08:03.031 }, 00:08:03.031 { 
00:08:03.031 "name": null, 00:08:03.031 "uuid": "1b934c14-919c-474e-9425-134c2d9d0180", 00:08:03.031 "is_configured": false, 00:08:03.031 "data_offset": 0, 00:08:03.031 "data_size": 63488 00:08:03.031 }, 00:08:03.031 { 00:08:03.031 "name": "BaseBdev3", 00:08:03.031 "uuid": "cbe7eea2-2abb-4e7e-b729-3ae3aa5f51e2", 00:08:03.031 "is_configured": true, 00:08:03.031 "data_offset": 2048, 00:08:03.031 "data_size": 63488 00:08:03.031 }, 00:08:03.031 { 00:08:03.031 "name": "BaseBdev4", 00:08:03.031 "uuid": "dede84da-0436-4b32-b330-2c2c0601b9d4", 00:08:03.031 "is_configured": true, 00:08:03.031 "data_offset": 2048, 00:08:03.031 "data_size": 63488 00:08:03.031 } 00:08:03.031 ] 00:08:03.031 }' 00:08:03.031 19:48:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:03.031 19:48:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:03.288 19:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:03.288 19:48:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.288 19:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:03.288 19:48:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:03.288 19:48:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.288 19:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:08:03.288 19:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:08:03.288 19:48:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.288 19:48:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:03.288 [2024-11-26 19:48:54.105658] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:03.288 19:48:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.288 19:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:08:03.288 19:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:03.288 19:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:03.288 19:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:03.288 19:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:03.288 19:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:08:03.288 19:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:03.288 19:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:03.288 19:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:03.288 19:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:03.288 19:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:03.288 19:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:03.288 19:48:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.288 19:48:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:03.288 19:48:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.288 19:48:54 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:03.288 "name": "Existed_Raid", 00:08:03.288 "uuid": "0e1efeb2-470f-40e0-97fc-c4b2eea9beb9", 00:08:03.288 "strip_size_kb": 64, 00:08:03.288 "state": "configuring", 00:08:03.288 "raid_level": "raid0", 00:08:03.288 "superblock": true, 00:08:03.288 "num_base_bdevs": 4, 00:08:03.288 "num_base_bdevs_discovered": 2, 00:08:03.288 "num_base_bdevs_operational": 4, 00:08:03.288 "base_bdevs_list": [ 00:08:03.288 { 00:08:03.288 "name": "BaseBdev1", 00:08:03.288 "uuid": "3d830559-3e9d-4fce-aae1-984fb180fd14", 00:08:03.288 "is_configured": true, 00:08:03.288 "data_offset": 2048, 00:08:03.288 "data_size": 63488 00:08:03.288 }, 00:08:03.288 { 00:08:03.288 "name": null, 00:08:03.288 "uuid": "1b934c14-919c-474e-9425-134c2d9d0180", 00:08:03.288 "is_configured": false, 00:08:03.288 "data_offset": 0, 00:08:03.288 "data_size": 63488 00:08:03.288 }, 00:08:03.288 { 00:08:03.288 "name": null, 00:08:03.288 "uuid": "cbe7eea2-2abb-4e7e-b729-3ae3aa5f51e2", 00:08:03.288 "is_configured": false, 00:08:03.289 "data_offset": 0, 00:08:03.289 "data_size": 63488 00:08:03.289 }, 00:08:03.289 { 00:08:03.289 "name": "BaseBdev4", 00:08:03.289 "uuid": "dede84da-0436-4b32-b330-2c2c0601b9d4", 00:08:03.289 "is_configured": true, 00:08:03.289 "data_offset": 2048, 00:08:03.289 "data_size": 63488 00:08:03.289 } 00:08:03.289 ] 00:08:03.289 }' 00:08:03.289 19:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:03.289 19:48:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:03.546 19:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:03.546 19:48:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.546 19:48:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:03.546 19:48:54 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:03.546 19:48:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.546 19:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:08:03.546 19:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:08:03.546 19:48:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.546 19:48:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:03.546 [2024-11-26 19:48:54.469726] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:03.546 19:48:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.546 19:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:08:03.546 19:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:03.546 19:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:03.546 19:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:03.546 19:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:03.546 19:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:08:03.546 19:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:03.546 19:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:03.546 19:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:08:03.546 19:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:03.546 19:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:03.546 19:48:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.546 19:48:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:03.546 19:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:03.803 19:48:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.803 19:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:03.803 "name": "Existed_Raid", 00:08:03.803 "uuid": "0e1efeb2-470f-40e0-97fc-c4b2eea9beb9", 00:08:03.803 "strip_size_kb": 64, 00:08:03.803 "state": "configuring", 00:08:03.803 "raid_level": "raid0", 00:08:03.803 "superblock": true, 00:08:03.803 "num_base_bdevs": 4, 00:08:03.803 "num_base_bdevs_discovered": 3, 00:08:03.803 "num_base_bdevs_operational": 4, 00:08:03.803 "base_bdevs_list": [ 00:08:03.803 { 00:08:03.803 "name": "BaseBdev1", 00:08:03.803 "uuid": "3d830559-3e9d-4fce-aae1-984fb180fd14", 00:08:03.803 "is_configured": true, 00:08:03.803 "data_offset": 2048, 00:08:03.803 "data_size": 63488 00:08:03.803 }, 00:08:03.803 { 00:08:03.803 "name": null, 00:08:03.803 "uuid": "1b934c14-919c-474e-9425-134c2d9d0180", 00:08:03.803 "is_configured": false, 00:08:03.803 "data_offset": 0, 00:08:03.803 "data_size": 63488 00:08:03.803 }, 00:08:03.803 { 00:08:03.803 "name": "BaseBdev3", 00:08:03.803 "uuid": "cbe7eea2-2abb-4e7e-b729-3ae3aa5f51e2", 00:08:03.803 "is_configured": true, 00:08:03.803 "data_offset": 2048, 00:08:03.803 "data_size": 63488 00:08:03.803 }, 00:08:03.803 { 00:08:03.803 "name": "BaseBdev4", 00:08:03.803 "uuid": 
"dede84da-0436-4b32-b330-2c2c0601b9d4", 00:08:03.803 "is_configured": true, 00:08:03.803 "data_offset": 2048, 00:08:03.803 "data_size": 63488 00:08:03.803 } 00:08:03.803 ] 00:08:03.803 }' 00:08:03.803 19:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:03.803 19:48:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:04.061 19:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:04.061 19:48:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.061 19:48:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:04.061 19:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:04.061 19:48:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.061 19:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:08:04.061 19:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:04.061 19:48:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.061 19:48:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:04.061 [2024-11-26 19:48:54.821836] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:04.061 19:48:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.061 19:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:08:04.061 19:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:04.061 19:48:54 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:04.061 19:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:04.061 19:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:04.061 19:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:08:04.061 19:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:04.061 19:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:04.061 19:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:04.061 19:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:04.061 19:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:04.061 19:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:04.061 19:48:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.061 19:48:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:04.061 19:48:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.061 19:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:04.061 "name": "Existed_Raid", 00:08:04.061 "uuid": "0e1efeb2-470f-40e0-97fc-c4b2eea9beb9", 00:08:04.061 "strip_size_kb": 64, 00:08:04.061 "state": "configuring", 00:08:04.061 "raid_level": "raid0", 00:08:04.061 "superblock": true, 00:08:04.061 "num_base_bdevs": 4, 00:08:04.061 "num_base_bdevs_discovered": 2, 00:08:04.061 "num_base_bdevs_operational": 4, 00:08:04.061 "base_bdevs_list": [ 00:08:04.061 { 00:08:04.062 "name": null, 00:08:04.062 
"uuid": "3d830559-3e9d-4fce-aae1-984fb180fd14", 00:08:04.062 "is_configured": false, 00:08:04.062 "data_offset": 0, 00:08:04.062 "data_size": 63488 00:08:04.062 }, 00:08:04.062 { 00:08:04.062 "name": null, 00:08:04.062 "uuid": "1b934c14-919c-474e-9425-134c2d9d0180", 00:08:04.062 "is_configured": false, 00:08:04.062 "data_offset": 0, 00:08:04.062 "data_size": 63488 00:08:04.062 }, 00:08:04.062 { 00:08:04.062 "name": "BaseBdev3", 00:08:04.062 "uuid": "cbe7eea2-2abb-4e7e-b729-3ae3aa5f51e2", 00:08:04.062 "is_configured": true, 00:08:04.062 "data_offset": 2048, 00:08:04.062 "data_size": 63488 00:08:04.062 }, 00:08:04.062 { 00:08:04.062 "name": "BaseBdev4", 00:08:04.062 "uuid": "dede84da-0436-4b32-b330-2c2c0601b9d4", 00:08:04.062 "is_configured": true, 00:08:04.062 "data_offset": 2048, 00:08:04.062 "data_size": 63488 00:08:04.062 } 00:08:04.062 ] 00:08:04.062 }' 00:08:04.062 19:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:04.062 19:48:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:04.319 19:48:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:04.319 19:48:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:04.319 19:48:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.319 19:48:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:04.319 19:48:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.319 19:48:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:08:04.319 19:48:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:08:04.319 19:48:55 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.319 19:48:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:04.319 [2024-11-26 19:48:55.235777] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:04.319 19:48:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.319 19:48:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:08:04.319 19:48:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:04.319 19:48:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:04.319 19:48:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:04.319 19:48:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:04.319 19:48:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:08:04.320 19:48:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:04.320 19:48:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:04.320 19:48:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:04.320 19:48:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:04.320 19:48:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:04.320 19:48:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.320 19:48:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:04.320 19:48:55 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:04.577 19:48:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.577 19:48:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:04.577 "name": "Existed_Raid", 00:08:04.577 "uuid": "0e1efeb2-470f-40e0-97fc-c4b2eea9beb9", 00:08:04.577 "strip_size_kb": 64, 00:08:04.577 "state": "configuring", 00:08:04.577 "raid_level": "raid0", 00:08:04.577 "superblock": true, 00:08:04.577 "num_base_bdevs": 4, 00:08:04.577 "num_base_bdevs_discovered": 3, 00:08:04.577 "num_base_bdevs_operational": 4, 00:08:04.577 "base_bdevs_list": [ 00:08:04.577 { 00:08:04.577 "name": null, 00:08:04.577 "uuid": "3d830559-3e9d-4fce-aae1-984fb180fd14", 00:08:04.577 "is_configured": false, 00:08:04.577 "data_offset": 0, 00:08:04.577 "data_size": 63488 00:08:04.577 }, 00:08:04.577 { 00:08:04.577 "name": "BaseBdev2", 00:08:04.577 "uuid": "1b934c14-919c-474e-9425-134c2d9d0180", 00:08:04.577 "is_configured": true, 00:08:04.577 "data_offset": 2048, 00:08:04.577 "data_size": 63488 00:08:04.577 }, 00:08:04.577 { 00:08:04.577 "name": "BaseBdev3", 00:08:04.577 "uuid": "cbe7eea2-2abb-4e7e-b729-3ae3aa5f51e2", 00:08:04.577 "is_configured": true, 00:08:04.577 "data_offset": 2048, 00:08:04.577 "data_size": 63488 00:08:04.577 }, 00:08:04.577 { 00:08:04.577 "name": "BaseBdev4", 00:08:04.577 "uuid": "dede84da-0436-4b32-b330-2c2c0601b9d4", 00:08:04.577 "is_configured": true, 00:08:04.577 "data_offset": 2048, 00:08:04.577 "data_size": 63488 00:08:04.577 } 00:08:04.577 ] 00:08:04.577 }' 00:08:04.577 19:48:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:04.577 19:48:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:04.835 19:48:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:04.835 19:48:55 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:04.835 19:48:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.835 19:48:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:04.835 19:48:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.835 19:48:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:08:04.835 19:48:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:08:04.835 19:48:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:04.835 19:48:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.835 19:48:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:04.835 19:48:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.835 19:48:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 3d830559-3e9d-4fce-aae1-984fb180fd14 00:08:04.835 19:48:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.835 19:48:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:04.835 [2024-11-26 19:48:55.612181] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:08:04.835 NewBaseBdev 00:08:04.835 [2024-11-26 19:48:55.612549] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:08:04.835 [2024-11-26 19:48:55.612565] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:08:04.835 [2024-11-26 19:48:55.612794] bdev_raid.c: 265:raid_bdev_create_cb: 
*DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:08:04.835 [2024-11-26 19:48:55.612903] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:08:04.835 [2024-11-26 19:48:55.612913] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:08:04.835 [2024-11-26 19:48:55.613013] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:04.835 19:48:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.835 19:48:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:08:04.835 19:48:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:08:04.835 19:48:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:04.835 19:48:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:04.835 19:48:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:04.835 19:48:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:04.835 19:48:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:04.836 19:48:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.836 19:48:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:04.836 19:48:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.836 19:48:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:08:04.836 19:48:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.836 19:48:55 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:04.836 [ 00:08:04.836 { 00:08:04.836 "name": "NewBaseBdev", 00:08:04.836 "aliases": [ 00:08:04.836 "3d830559-3e9d-4fce-aae1-984fb180fd14" 00:08:04.836 ], 00:08:04.836 "product_name": "Malloc disk", 00:08:04.836 "block_size": 512, 00:08:04.836 "num_blocks": 65536, 00:08:04.836 "uuid": "3d830559-3e9d-4fce-aae1-984fb180fd14", 00:08:04.836 "assigned_rate_limits": { 00:08:04.836 "rw_ios_per_sec": 0, 00:08:04.836 "rw_mbytes_per_sec": 0, 00:08:04.836 "r_mbytes_per_sec": 0, 00:08:04.836 "w_mbytes_per_sec": 0 00:08:04.836 }, 00:08:04.836 "claimed": true, 00:08:04.836 "claim_type": "exclusive_write", 00:08:04.836 "zoned": false, 00:08:04.836 "supported_io_types": { 00:08:04.836 "read": true, 00:08:04.836 "write": true, 00:08:04.836 "unmap": true, 00:08:04.836 "flush": true, 00:08:04.836 "reset": true, 00:08:04.836 "nvme_admin": false, 00:08:04.836 "nvme_io": false, 00:08:04.836 "nvme_io_md": false, 00:08:04.836 "write_zeroes": true, 00:08:04.836 "zcopy": true, 00:08:04.836 "get_zone_info": false, 00:08:04.836 "zone_management": false, 00:08:04.836 "zone_append": false, 00:08:04.836 "compare": false, 00:08:04.836 "compare_and_write": false, 00:08:04.836 "abort": true, 00:08:04.836 "seek_hole": false, 00:08:04.836 "seek_data": false, 00:08:04.836 "copy": true, 00:08:04.836 "nvme_iov_md": false 00:08:04.836 }, 00:08:04.836 "memory_domains": [ 00:08:04.836 { 00:08:04.836 "dma_device_id": "system", 00:08:04.836 "dma_device_type": 1 00:08:04.836 }, 00:08:04.836 { 00:08:04.836 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:04.836 "dma_device_type": 2 00:08:04.836 } 00:08:04.836 ], 00:08:04.836 "driver_specific": {} 00:08:04.836 } 00:08:04.836 ] 00:08:04.836 19:48:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.836 19:48:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:04.836 19:48:55 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:08:04.836 19:48:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:04.836 19:48:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:04.836 19:48:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:04.836 19:48:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:04.836 19:48:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:08:04.836 19:48:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:04.836 19:48:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:04.836 19:48:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:04.836 19:48:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:04.836 19:48:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:04.836 19:48:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:04.836 19:48:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.836 19:48:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:04.836 19:48:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.836 19:48:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:04.836 "name": "Existed_Raid", 00:08:04.836 "uuid": "0e1efeb2-470f-40e0-97fc-c4b2eea9beb9", 00:08:04.836 "strip_size_kb": 64, 00:08:04.836 
"state": "online", 00:08:04.836 "raid_level": "raid0", 00:08:04.836 "superblock": true, 00:08:04.836 "num_base_bdevs": 4, 00:08:04.836 "num_base_bdevs_discovered": 4, 00:08:04.836 "num_base_bdevs_operational": 4, 00:08:04.836 "base_bdevs_list": [ 00:08:04.836 { 00:08:04.836 "name": "NewBaseBdev", 00:08:04.836 "uuid": "3d830559-3e9d-4fce-aae1-984fb180fd14", 00:08:04.836 "is_configured": true, 00:08:04.836 "data_offset": 2048, 00:08:04.836 "data_size": 63488 00:08:04.836 }, 00:08:04.836 { 00:08:04.836 "name": "BaseBdev2", 00:08:04.836 "uuid": "1b934c14-919c-474e-9425-134c2d9d0180", 00:08:04.836 "is_configured": true, 00:08:04.836 "data_offset": 2048, 00:08:04.836 "data_size": 63488 00:08:04.836 }, 00:08:04.836 { 00:08:04.836 "name": "BaseBdev3", 00:08:04.836 "uuid": "cbe7eea2-2abb-4e7e-b729-3ae3aa5f51e2", 00:08:04.836 "is_configured": true, 00:08:04.836 "data_offset": 2048, 00:08:04.836 "data_size": 63488 00:08:04.836 }, 00:08:04.836 { 00:08:04.836 "name": "BaseBdev4", 00:08:04.836 "uuid": "dede84da-0436-4b32-b330-2c2c0601b9d4", 00:08:04.836 "is_configured": true, 00:08:04.836 "data_offset": 2048, 00:08:04.836 "data_size": 63488 00:08:04.836 } 00:08:04.836 ] 00:08:04.836 }' 00:08:04.836 19:48:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:04.836 19:48:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:05.094 19:48:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:08:05.094 19:48:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:05.094 19:48:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:05.094 19:48:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:05.094 19:48:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:05.094 
19:48:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:05.094 19:48:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:05.094 19:48:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.094 19:48:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:05.094 19:48:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:05.094 [2024-11-26 19:48:55.960649] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:05.094 19:48:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.094 19:48:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:05.094 "name": "Existed_Raid", 00:08:05.094 "aliases": [ 00:08:05.094 "0e1efeb2-470f-40e0-97fc-c4b2eea9beb9" 00:08:05.094 ], 00:08:05.094 "product_name": "Raid Volume", 00:08:05.094 "block_size": 512, 00:08:05.094 "num_blocks": 253952, 00:08:05.094 "uuid": "0e1efeb2-470f-40e0-97fc-c4b2eea9beb9", 00:08:05.094 "assigned_rate_limits": { 00:08:05.094 "rw_ios_per_sec": 0, 00:08:05.094 "rw_mbytes_per_sec": 0, 00:08:05.094 "r_mbytes_per_sec": 0, 00:08:05.094 "w_mbytes_per_sec": 0 00:08:05.094 }, 00:08:05.094 "claimed": false, 00:08:05.094 "zoned": false, 00:08:05.094 "supported_io_types": { 00:08:05.094 "read": true, 00:08:05.094 "write": true, 00:08:05.094 "unmap": true, 00:08:05.094 "flush": true, 00:08:05.094 "reset": true, 00:08:05.094 "nvme_admin": false, 00:08:05.094 "nvme_io": false, 00:08:05.094 "nvme_io_md": false, 00:08:05.094 "write_zeroes": true, 00:08:05.094 "zcopy": false, 00:08:05.094 "get_zone_info": false, 00:08:05.094 "zone_management": false, 00:08:05.094 "zone_append": false, 00:08:05.094 "compare": false, 00:08:05.094 "compare_and_write": false, 00:08:05.094 "abort": 
false, 00:08:05.094 "seek_hole": false, 00:08:05.094 "seek_data": false, 00:08:05.094 "copy": false, 00:08:05.094 "nvme_iov_md": false 00:08:05.094 }, 00:08:05.094 "memory_domains": [ 00:08:05.094 { 00:08:05.095 "dma_device_id": "system", 00:08:05.095 "dma_device_type": 1 00:08:05.095 }, 00:08:05.095 { 00:08:05.095 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:05.095 "dma_device_type": 2 00:08:05.095 }, 00:08:05.095 { 00:08:05.095 "dma_device_id": "system", 00:08:05.095 "dma_device_type": 1 00:08:05.095 }, 00:08:05.095 { 00:08:05.095 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:05.095 "dma_device_type": 2 00:08:05.095 }, 00:08:05.095 { 00:08:05.095 "dma_device_id": "system", 00:08:05.095 "dma_device_type": 1 00:08:05.095 }, 00:08:05.095 { 00:08:05.095 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:05.095 "dma_device_type": 2 00:08:05.095 }, 00:08:05.095 { 00:08:05.095 "dma_device_id": "system", 00:08:05.095 "dma_device_type": 1 00:08:05.095 }, 00:08:05.095 { 00:08:05.095 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:05.095 "dma_device_type": 2 00:08:05.095 } 00:08:05.095 ], 00:08:05.095 "driver_specific": { 00:08:05.095 "raid": { 00:08:05.095 "uuid": "0e1efeb2-470f-40e0-97fc-c4b2eea9beb9", 00:08:05.095 "strip_size_kb": 64, 00:08:05.095 "state": "online", 00:08:05.095 "raid_level": "raid0", 00:08:05.095 "superblock": true, 00:08:05.095 "num_base_bdevs": 4, 00:08:05.095 "num_base_bdevs_discovered": 4, 00:08:05.095 "num_base_bdevs_operational": 4, 00:08:05.095 "base_bdevs_list": [ 00:08:05.095 { 00:08:05.095 "name": "NewBaseBdev", 00:08:05.095 "uuid": "3d830559-3e9d-4fce-aae1-984fb180fd14", 00:08:05.095 "is_configured": true, 00:08:05.095 "data_offset": 2048, 00:08:05.095 "data_size": 63488 00:08:05.095 }, 00:08:05.095 { 00:08:05.095 "name": "BaseBdev2", 00:08:05.095 "uuid": "1b934c14-919c-474e-9425-134c2d9d0180", 00:08:05.095 "is_configured": true, 00:08:05.095 "data_offset": 2048, 00:08:05.095 "data_size": 63488 00:08:05.095 }, 00:08:05.095 { 00:08:05.095 
"name": "BaseBdev3", 00:08:05.095 "uuid": "cbe7eea2-2abb-4e7e-b729-3ae3aa5f51e2", 00:08:05.095 "is_configured": true, 00:08:05.095 "data_offset": 2048, 00:08:05.095 "data_size": 63488 00:08:05.095 }, 00:08:05.095 { 00:08:05.095 "name": "BaseBdev4", 00:08:05.095 "uuid": "dede84da-0436-4b32-b330-2c2c0601b9d4", 00:08:05.095 "is_configured": true, 00:08:05.095 "data_offset": 2048, 00:08:05.095 "data_size": 63488 00:08:05.095 } 00:08:05.095 ] 00:08:05.095 } 00:08:05.095 } 00:08:05.095 }' 00:08:05.095 19:48:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:05.095 19:48:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:08:05.095 BaseBdev2 00:08:05.095 BaseBdev3 00:08:05.095 BaseBdev4' 00:08:05.095 19:48:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:05.353 19:48:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:05.353 19:48:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:05.353 19:48:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:08:05.353 19:48:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:05.353 19:48:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.353 19:48:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:05.353 19:48:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.353 19:48:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:05.353 19:48:56 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:05.353 19:48:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:05.353 19:48:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:05.353 19:48:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:05.353 19:48:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.353 19:48:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:05.353 19:48:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.353 19:48:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:05.353 19:48:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:05.353 19:48:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:05.353 19:48:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:05.353 19:48:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:05.353 19:48:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.353 19:48:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:05.353 19:48:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.353 19:48:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:05.353 19:48:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:08:05.353 19:48:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:05.353 19:48:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:08:05.353 19:48:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.353 19:48:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:05.353 19:48:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:05.353 19:48:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.353 19:48:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:05.353 19:48:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:05.353 19:48:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:05.353 19:48:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.353 19:48:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:05.353 [2024-11-26 19:48:56.192359] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:05.353 [2024-11-26 19:48:56.192393] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:05.353 [2024-11-26 19:48:56.192472] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:05.353 [2024-11-26 19:48:56.192539] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:05.353 [2024-11-26 19:48:56.192548] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:08:05.353 19:48:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.353 19:48:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 68313 00:08:05.353 19:48:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 68313 ']' 00:08:05.353 19:48:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 68313 00:08:05.353 19:48:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:08:05.353 19:48:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:05.353 19:48:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68313 00:08:05.353 killing process with pid 68313 00:08:05.353 19:48:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:05.353 19:48:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:05.353 19:48:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68313' 00:08:05.353 19:48:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 68313 00:08:05.353 [2024-11-26 19:48:56.224241] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:05.353 19:48:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 68313 00:08:05.611 [2024-11-26 19:48:56.426415] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:06.279 19:48:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:08:06.279 00:08:06.279 real 0m8.129s 00:08:06.279 user 0m12.976s 00:08:06.279 sys 0m1.450s 00:08:06.279 19:48:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:06.279 
************************************ 00:08:06.279 END TEST raid_state_function_test_sb 00:08:06.279 ************************************ 00:08:06.279 19:48:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:06.279 19:48:57 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:08:06.279 19:48:57 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:06.279 19:48:57 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:06.279 19:48:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:06.279 ************************************ 00:08:06.279 START TEST raid_superblock_test 00:08:06.279 ************************************ 00:08:06.279 19:48:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 4 00:08:06.279 19:48:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:08:06.279 19:48:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:08:06.279 19:48:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:08:06.279 19:48:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:08:06.279 19:48:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:08:06.279 19:48:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:08:06.279 19:48:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:08:06.279 19:48:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:08:06.279 19:48:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:08:06.279 19:48:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:08:06.279 19:48:57 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:08:06.279 19:48:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:08:06.279 19:48:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:08:06.279 19:48:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:08:06.279 19:48:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:08:06.279 19:48:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:08:06.279 19:48:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=68950 00:08:06.279 19:48:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 68950 00:08:06.279 19:48:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 68950 ']' 00:08:06.279 19:48:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:06.279 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:06.279 19:48:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:08:06.279 19:48:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:06.279 19:48:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:06.279 19:48:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:06.279 19:48:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.279 [2024-11-26 19:48:57.145974] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 
00:08:06.279 [2024-11-26 19:48:57.146098] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68950 ] 00:08:06.535 [2024-11-26 19:48:57.300770] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:06.535 [2024-11-26 19:48:57.402213] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:06.791 [2024-11-26 19:48:57.522901] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:06.792 [2024-11-26 19:48:57.522967] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:07.354 19:48:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:07.354 19:48:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:08:07.354 19:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:08:07.354 19:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:07.354 19:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:08:07.354 19:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:08:07.354 19:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:08:07.355 19:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:07.355 19:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:07.355 19:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:07.355 19:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:08:07.355 
19:48:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.355 19:48:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.355 malloc1 00:08:07.355 19:48:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.355 19:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:07.355 19:48:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.355 19:48:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.355 [2024-11-26 19:48:58.070724] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:07.355 [2024-11-26 19:48:58.070940] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:07.355 [2024-11-26 19:48:58.070968] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:07.355 [2024-11-26 19:48:58.070978] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:07.355 [2024-11-26 19:48:58.072948] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:07.355 [2024-11-26 19:48:58.072980] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:07.355 pt1 00:08:07.355 19:48:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.355 19:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:07.355 19:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:07.355 19:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:08:07.355 19:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:08:07.355 19:48:58 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:08:07.355 19:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:07.355 19:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:07.355 19:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:07.355 19:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:08:07.355 19:48:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.355 19:48:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.355 malloc2 00:08:07.355 19:48:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.355 19:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:07.355 19:48:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.355 19:48:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.355 [2024-11-26 19:48:58.104438] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:07.355 [2024-11-26 19:48:58.104488] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:07.355 [2024-11-26 19:48:58.104512] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:08:07.355 [2024-11-26 19:48:58.104519] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:07.355 [2024-11-26 19:48:58.106503] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:07.355 [2024-11-26 19:48:58.106532] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:07.355 
pt2 00:08:07.355 19:48:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.355 19:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:07.355 19:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:07.355 19:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:08:07.355 19:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:08:07.355 19:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:08:07.355 19:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:07.355 19:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:07.355 19:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:07.355 19:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:08:07.355 19:48:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.355 19:48:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.355 malloc3 00:08:07.355 19:48:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.355 19:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:08:07.355 19:48:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.355 19:48:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.355 [2024-11-26 19:48:58.153961] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:08:07.355 [2024-11-26 19:48:58.154014] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:07.355 [2024-11-26 19:48:58.154034] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:08:07.355 [2024-11-26 19:48:58.154042] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:07.355 [2024-11-26 19:48:58.155987] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:07.355 [2024-11-26 19:48:58.156139] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:08:07.355 pt3 00:08:07.355 19:48:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.355 19:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:07.355 19:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:07.355 19:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:08:07.355 19:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:08:07.355 19:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:08:07.355 19:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:07.355 19:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:07.355 19:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:07.355 19:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:08:07.355 19:48:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.355 19:48:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.355 malloc4 00:08:07.355 19:48:58 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.355 19:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:08:07.355 19:48:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.355 19:48:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.355 [2024-11-26 19:48:58.187615] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:08:07.355 [2024-11-26 19:48:58.187659] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:07.355 [2024-11-26 19:48:58.187673] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:08:07.355 [2024-11-26 19:48:58.187680] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:07.355 [2024-11-26 19:48:58.189608] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:07.355 [2024-11-26 19:48:58.189635] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:08:07.355 pt4 00:08:07.355 19:48:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.355 19:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:07.355 19:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:07.355 19:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:08:07.355 19:48:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.355 19:48:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.355 [2024-11-26 19:48:58.199655] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:07.356 [2024-11-26 
19:48:58.201297] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:07.356 [2024-11-26 19:48:58.201383] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:08:07.356 [2024-11-26 19:48:58.201422] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:08:07.356 [2024-11-26 19:48:58.201581] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:08:07.356 [2024-11-26 19:48:58.201589] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:08:07.356 [2024-11-26 19:48:58.201809] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:08:07.356 [2024-11-26 19:48:58.201931] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:07.356 [2024-11-26 19:48:58.201940] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:08:07.356 [2024-11-26 19:48:58.202050] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:07.356 19:48:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.356 19:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:08:07.356 19:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:07.356 19:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:07.356 19:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:07.356 19:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:07.356 19:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:08:07.356 19:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:08:07.356 19:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:07.356 19:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:07.356 19:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:07.356 19:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:07.356 19:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:07.356 19:48:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.356 19:48:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.356 19:48:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.356 19:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:07.356 "name": "raid_bdev1", 00:08:07.356 "uuid": "766d3e1d-f084-4d31-91b8-9811e997f8e4", 00:08:07.356 "strip_size_kb": 64, 00:08:07.356 "state": "online", 00:08:07.356 "raid_level": "raid0", 00:08:07.356 "superblock": true, 00:08:07.356 "num_base_bdevs": 4, 00:08:07.356 "num_base_bdevs_discovered": 4, 00:08:07.356 "num_base_bdevs_operational": 4, 00:08:07.356 "base_bdevs_list": [ 00:08:07.356 { 00:08:07.356 "name": "pt1", 00:08:07.356 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:07.356 "is_configured": true, 00:08:07.356 "data_offset": 2048, 00:08:07.356 "data_size": 63488 00:08:07.356 }, 00:08:07.356 { 00:08:07.356 "name": "pt2", 00:08:07.356 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:07.356 "is_configured": true, 00:08:07.356 "data_offset": 2048, 00:08:07.356 "data_size": 63488 00:08:07.356 }, 00:08:07.356 { 00:08:07.356 "name": "pt3", 00:08:07.356 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:07.356 "is_configured": true, 00:08:07.356 "data_offset": 2048, 00:08:07.356 
"data_size": 63488 00:08:07.356 }, 00:08:07.356 { 00:08:07.356 "name": "pt4", 00:08:07.356 "uuid": "00000000-0000-0000-0000-000000000004", 00:08:07.356 "is_configured": true, 00:08:07.356 "data_offset": 2048, 00:08:07.356 "data_size": 63488 00:08:07.356 } 00:08:07.356 ] 00:08:07.356 }' 00:08:07.356 19:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:07.356 19:48:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.612 19:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:08:07.612 19:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:07.612 19:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:07.612 19:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:07.612 19:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:07.612 19:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:07.612 19:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:07.612 19:48:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.612 19:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:07.612 19:48:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.612 [2024-11-26 19:48:58.516028] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:07.612 19:48:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.612 19:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:07.612 "name": "raid_bdev1", 00:08:07.612 "aliases": [ 00:08:07.612 "766d3e1d-f084-4d31-91b8-9811e997f8e4" 
00:08:07.612 ], 00:08:07.612 "product_name": "Raid Volume", 00:08:07.612 "block_size": 512, 00:08:07.612 "num_blocks": 253952, 00:08:07.612 "uuid": "766d3e1d-f084-4d31-91b8-9811e997f8e4", 00:08:07.612 "assigned_rate_limits": { 00:08:07.612 "rw_ios_per_sec": 0, 00:08:07.612 "rw_mbytes_per_sec": 0, 00:08:07.612 "r_mbytes_per_sec": 0, 00:08:07.612 "w_mbytes_per_sec": 0 00:08:07.612 }, 00:08:07.612 "claimed": false, 00:08:07.612 "zoned": false, 00:08:07.612 "supported_io_types": { 00:08:07.612 "read": true, 00:08:07.612 "write": true, 00:08:07.612 "unmap": true, 00:08:07.612 "flush": true, 00:08:07.612 "reset": true, 00:08:07.612 "nvme_admin": false, 00:08:07.612 "nvme_io": false, 00:08:07.612 "nvme_io_md": false, 00:08:07.612 "write_zeroes": true, 00:08:07.612 "zcopy": false, 00:08:07.612 "get_zone_info": false, 00:08:07.612 "zone_management": false, 00:08:07.612 "zone_append": false, 00:08:07.612 "compare": false, 00:08:07.612 "compare_and_write": false, 00:08:07.612 "abort": false, 00:08:07.612 "seek_hole": false, 00:08:07.612 "seek_data": false, 00:08:07.612 "copy": false, 00:08:07.612 "nvme_iov_md": false 00:08:07.612 }, 00:08:07.612 "memory_domains": [ 00:08:07.612 { 00:08:07.612 "dma_device_id": "system", 00:08:07.612 "dma_device_type": 1 00:08:07.612 }, 00:08:07.612 { 00:08:07.612 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:07.612 "dma_device_type": 2 00:08:07.612 }, 00:08:07.612 { 00:08:07.612 "dma_device_id": "system", 00:08:07.612 "dma_device_type": 1 00:08:07.612 }, 00:08:07.612 { 00:08:07.612 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:07.612 "dma_device_type": 2 00:08:07.612 }, 00:08:07.612 { 00:08:07.612 "dma_device_id": "system", 00:08:07.612 "dma_device_type": 1 00:08:07.612 }, 00:08:07.612 { 00:08:07.612 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:07.612 "dma_device_type": 2 00:08:07.612 }, 00:08:07.612 { 00:08:07.612 "dma_device_id": "system", 00:08:07.612 "dma_device_type": 1 00:08:07.612 }, 00:08:07.612 { 00:08:07.612 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:08:07.612 "dma_device_type": 2 00:08:07.612 } 00:08:07.612 ], 00:08:07.612 "driver_specific": { 00:08:07.612 "raid": { 00:08:07.612 "uuid": "766d3e1d-f084-4d31-91b8-9811e997f8e4", 00:08:07.612 "strip_size_kb": 64, 00:08:07.612 "state": "online", 00:08:07.612 "raid_level": "raid0", 00:08:07.612 "superblock": true, 00:08:07.612 "num_base_bdevs": 4, 00:08:07.612 "num_base_bdevs_discovered": 4, 00:08:07.612 "num_base_bdevs_operational": 4, 00:08:07.612 "base_bdevs_list": [ 00:08:07.612 { 00:08:07.612 "name": "pt1", 00:08:07.612 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:07.612 "is_configured": true, 00:08:07.612 "data_offset": 2048, 00:08:07.612 "data_size": 63488 00:08:07.612 }, 00:08:07.612 { 00:08:07.612 "name": "pt2", 00:08:07.612 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:07.612 "is_configured": true, 00:08:07.612 "data_offset": 2048, 00:08:07.612 "data_size": 63488 00:08:07.612 }, 00:08:07.612 { 00:08:07.612 "name": "pt3", 00:08:07.612 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:07.613 "is_configured": true, 00:08:07.613 "data_offset": 2048, 00:08:07.613 "data_size": 63488 00:08:07.613 }, 00:08:07.613 { 00:08:07.613 "name": "pt4", 00:08:07.613 "uuid": "00000000-0000-0000-0000-000000000004", 00:08:07.613 "is_configured": true, 00:08:07.613 "data_offset": 2048, 00:08:07.613 "data_size": 63488 00:08:07.613 } 00:08:07.613 ] 00:08:07.613 } 00:08:07.613 } 00:08:07.613 }' 00:08:07.613 19:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:07.869 19:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:07.869 pt2 00:08:07.869 pt3 00:08:07.870 pt4' 00:08:07.870 19:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:07.870 19:48:58 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:07.870 19:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:07.870 19:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:07.870 19:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:07.870 19:48:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.870 19:48:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.870 19:48:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.870 19:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:07.870 19:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:07.870 19:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:07.870 19:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:07.870 19:48:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.870 19:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:07.870 19:48:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.870 19:48:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.870 19:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:07.870 19:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:07.870 19:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:07.870 19:48:58 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:08:07.870 19:48:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.870 19:48:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.870 19:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:07.870 19:48:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.870 19:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:07.870 19:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:07.870 19:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:07.870 19:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:08:07.870 19:48:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.870 19:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:07.870 19:48:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.870 19:48:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.870 19:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:07.870 19:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:07.870 19:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:07.870 19:48:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.870 19:48:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:08:07.870 19:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:08:07.870 [2024-11-26 19:48:58.744006] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:07.870 19:48:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.870 19:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=766d3e1d-f084-4d31-91b8-9811e997f8e4 00:08:07.870 19:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 766d3e1d-f084-4d31-91b8-9811e997f8e4 ']' 00:08:07.870 19:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:07.870 19:48:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.870 19:48:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.870 [2024-11-26 19:48:58.775731] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:07.870 [2024-11-26 19:48:58.775753] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:07.870 [2024-11-26 19:48:58.775829] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:07.870 [2024-11-26 19:48:58.775897] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:07.870 [2024-11-26 19:48:58.775909] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:08:07.870 19:48:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.870 19:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:07.870 19:48:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.870 19:48:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 
-- # set +x 00:08:07.870 19:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:08:07.870 19:48:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.127 19:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:08:08.127 19:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:08:08.127 19:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:08.127 19:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:08:08.127 19:48:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.127 19:48:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.127 19:48:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.127 19:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:08.127 19:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:08:08.127 19:48:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.127 19:48:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.127 19:48:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.127 19:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:08.127 19:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:08:08.127 19:48:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.127 19:48:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.127 19:48:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:08:08.127 19:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:08.127 19:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:08:08.127 19:48:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.127 19:48:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.127 19:48:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.127 19:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:08:08.127 19:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:08:08.127 19:48:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.127 19:48:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.127 19:48:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.127 19:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:08:08.127 19:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:08:08.127 19:48:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:08:08.127 19:48:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:08:08.127 19:48:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:08:08.127 19:48:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:08.127 19:48:58 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@644 -- # type -t rpc_cmd 00:08:08.127 19:48:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:08.127 19:48:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:08:08.127 19:48:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.127 19:48:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.127 [2024-11-26 19:48:58.891773] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:08.127 [2024-11-26 19:48:58.893497] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:08:08.127 [2024-11-26 19:48:58.893635] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:08:08.127 [2024-11-26 19:48:58.893673] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:08:08.127 [2024-11-26 19:48:58.893719] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:08:08.127 [2024-11-26 19:48:58.893762] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:08:08.127 [2024-11-26 19:48:58.893779] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:08:08.128 [2024-11-26 19:48:58.893795] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:08:08.128 [2024-11-26 19:48:58.893805] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:08.128 [2024-11-26 19:48:58.893819] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state 
configuring 00:08:08.128 request: 00:08:08.128 { 00:08:08.128 "name": "raid_bdev1", 00:08:08.128 "raid_level": "raid0", 00:08:08.128 "base_bdevs": [ 00:08:08.128 "malloc1", 00:08:08.128 "malloc2", 00:08:08.128 "malloc3", 00:08:08.128 "malloc4" 00:08:08.128 ], 00:08:08.128 "strip_size_kb": 64, 00:08:08.128 "superblock": false, 00:08:08.128 "method": "bdev_raid_create", 00:08:08.128 "req_id": 1 00:08:08.128 } 00:08:08.128 Got JSON-RPC error response 00:08:08.128 response: 00:08:08.128 { 00:08:08.128 "code": -17, 00:08:08.128 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:08:08.128 } 00:08:08.128 19:48:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:08:08.128 19:48:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:08:08.128 19:48:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:08.128 19:48:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:08.128 19:48:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:08.128 19:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:08.128 19:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:08:08.128 19:48:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.128 19:48:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.128 19:48:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.128 19:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:08:08.128 19:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:08:08.128 19:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:08:08.128 19:48:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.128 19:48:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.128 [2024-11-26 19:48:58.935751] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:08.128 [2024-11-26 19:48:58.935798] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:08.128 [2024-11-26 19:48:58.935816] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:08:08.128 [2024-11-26 19:48:58.935825] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:08.128 [2024-11-26 19:48:58.937767] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:08.128 [2024-11-26 19:48:58.937799] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:08.128 [2024-11-26 19:48:58.937866] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:08.128 [2024-11-26 19:48:58.937912] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:08.128 pt1 00:08:08.128 19:48:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.128 19:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:08:08.128 19:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:08.128 19:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:08.128 19:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:08.128 19:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:08.128 19:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:08:08.128 19:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:08.128 19:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:08.128 19:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:08.128 19:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:08.128 19:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:08.128 19:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:08.128 19:48:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.128 19:48:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.128 19:48:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.128 19:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:08.128 "name": "raid_bdev1", 00:08:08.128 "uuid": "766d3e1d-f084-4d31-91b8-9811e997f8e4", 00:08:08.128 "strip_size_kb": 64, 00:08:08.128 "state": "configuring", 00:08:08.128 "raid_level": "raid0", 00:08:08.128 "superblock": true, 00:08:08.128 "num_base_bdevs": 4, 00:08:08.128 "num_base_bdevs_discovered": 1, 00:08:08.128 "num_base_bdevs_operational": 4, 00:08:08.128 "base_bdevs_list": [ 00:08:08.128 { 00:08:08.128 "name": "pt1", 00:08:08.128 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:08.128 "is_configured": true, 00:08:08.128 "data_offset": 2048, 00:08:08.128 "data_size": 63488 00:08:08.128 }, 00:08:08.128 { 00:08:08.128 "name": null, 00:08:08.128 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:08.128 "is_configured": false, 00:08:08.128 "data_offset": 2048, 00:08:08.128 "data_size": 63488 00:08:08.128 }, 00:08:08.128 { 00:08:08.128 "name": null, 00:08:08.128 "uuid": 
"00000000-0000-0000-0000-000000000003", 00:08:08.128 "is_configured": false, 00:08:08.128 "data_offset": 2048, 00:08:08.128 "data_size": 63488 00:08:08.128 }, 00:08:08.128 { 00:08:08.128 "name": null, 00:08:08.128 "uuid": "00000000-0000-0000-0000-000000000004", 00:08:08.128 "is_configured": false, 00:08:08.128 "data_offset": 2048, 00:08:08.128 "data_size": 63488 00:08:08.128 } 00:08:08.128 ] 00:08:08.128 }' 00:08:08.128 19:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:08.128 19:48:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.386 19:48:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:08:08.386 19:48:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:08.386 19:48:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.386 19:48:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.386 [2024-11-26 19:48:59.247844] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:08.386 [2024-11-26 19:48:59.247919] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:08.386 [2024-11-26 19:48:59.247944] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:08:08.386 [2024-11-26 19:48:59.247954] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:08.386 [2024-11-26 19:48:59.248368] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:08.386 [2024-11-26 19:48:59.248383] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:08.387 [2024-11-26 19:48:59.248458] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:08.387 [2024-11-26 19:48:59.248480] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:08.387 pt2 00:08:08.387 19:48:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.387 19:48:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:08:08.387 19:48:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.387 19:48:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.387 [2024-11-26 19:48:59.255830] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:08:08.387 19:48:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.387 19:48:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:08:08.387 19:48:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:08.387 19:48:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:08.387 19:48:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:08.387 19:48:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:08.387 19:48:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:08:08.387 19:48:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:08.387 19:48:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:08.387 19:48:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:08.387 19:48:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:08.387 19:48:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:08.387 19:48:59 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:08.387 19:48:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.387 19:48:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.387 19:48:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.387 19:48:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:08.387 "name": "raid_bdev1", 00:08:08.387 "uuid": "766d3e1d-f084-4d31-91b8-9811e997f8e4", 00:08:08.387 "strip_size_kb": 64, 00:08:08.387 "state": "configuring", 00:08:08.387 "raid_level": "raid0", 00:08:08.387 "superblock": true, 00:08:08.387 "num_base_bdevs": 4, 00:08:08.387 "num_base_bdevs_discovered": 1, 00:08:08.387 "num_base_bdevs_operational": 4, 00:08:08.387 "base_bdevs_list": [ 00:08:08.387 { 00:08:08.387 "name": "pt1", 00:08:08.387 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:08.387 "is_configured": true, 00:08:08.387 "data_offset": 2048, 00:08:08.387 "data_size": 63488 00:08:08.387 }, 00:08:08.387 { 00:08:08.387 "name": null, 00:08:08.387 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:08.387 "is_configured": false, 00:08:08.387 "data_offset": 0, 00:08:08.387 "data_size": 63488 00:08:08.387 }, 00:08:08.387 { 00:08:08.387 "name": null, 00:08:08.387 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:08.387 "is_configured": false, 00:08:08.387 "data_offset": 2048, 00:08:08.387 "data_size": 63488 00:08:08.387 }, 00:08:08.387 { 00:08:08.387 "name": null, 00:08:08.387 "uuid": "00000000-0000-0000-0000-000000000004", 00:08:08.387 "is_configured": false, 00:08:08.387 "data_offset": 2048, 00:08:08.387 "data_size": 63488 00:08:08.387 } 00:08:08.387 ] 00:08:08.387 }' 00:08:08.387 19:48:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:08.387 19:48:59 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:08.644 19:48:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:08:08.644 19:48:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:08.644 19:48:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:08.644 19:48:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.644 19:48:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.644 [2024-11-26 19:48:59.563902] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:08.644 [2024-11-26 19:48:59.563966] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:08.644 [2024-11-26 19:48:59.563984] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:08:08.644 [2024-11-26 19:48:59.563992] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:08.644 [2024-11-26 19:48:59.564412] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:08.644 [2024-11-26 19:48:59.564424] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:08.644 [2024-11-26 19:48:59.564500] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:08.644 [2024-11-26 19:48:59.564521] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:08.644 pt2 00:08:08.644 19:48:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.644 19:48:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:08.644 19:48:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:08.644 19:48:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:08:08.644 19:48:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.644 19:48:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.644 [2024-11-26 19:48:59.571870] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:08:08.644 [2024-11-26 19:48:59.571913] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:08.644 [2024-11-26 19:48:59.571929] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:08:08.644 [2024-11-26 19:48:59.571937] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:08.644 [2024-11-26 19:48:59.572282] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:08.644 [2024-11-26 19:48:59.572299] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:08:08.644 [2024-11-26 19:48:59.572367] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:08:08.644 [2024-11-26 19:48:59.572386] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:08:08.644 pt3 00:08:08.644 19:48:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.644 19:48:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:08.644 19:48:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:08.644 19:48:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:08:08.644 19:48:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.644 19:48:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.902 [2024-11-26 19:48:59.579853] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:08:08.902 [2024-11-26 19:48:59.579892] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:08.902 [2024-11-26 19:48:59.579906] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:08:08.902 [2024-11-26 19:48:59.579913] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:08.902 [2024-11-26 19:48:59.580236] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:08.902 [2024-11-26 19:48:59.580252] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:08:08.902 [2024-11-26 19:48:59.580305] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:08:08.902 [2024-11-26 19:48:59.580322] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:08:08.902 [2024-11-26 19:48:59.580446] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:08.902 [2024-11-26 19:48:59.580454] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:08:08.902 [2024-11-26 19:48:59.580682] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:08:08.902 [2024-11-26 19:48:59.580807] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:08.902 [2024-11-26 19:48:59.580822] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:08.902 [2024-11-26 19:48:59.580932] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:08.902 pt4 00:08:08.902 19:48:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.902 19:48:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:08.902 19:48:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:08:08.902 19:48:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:08:08.902 19:48:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:08.902 19:48:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:08.902 19:48:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:08.902 19:48:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:08.902 19:48:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:08:08.902 19:48:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:08.902 19:48:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:08.902 19:48:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:08.902 19:48:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:08.902 19:48:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:08.902 19:48:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:08.902 19:48:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.902 19:48:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.902 19:48:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.902 19:48:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:08.902 "name": "raid_bdev1", 00:08:08.902 "uuid": "766d3e1d-f084-4d31-91b8-9811e997f8e4", 00:08:08.902 "strip_size_kb": 64, 00:08:08.902 "state": "online", 00:08:08.902 "raid_level": "raid0", 00:08:08.902 
"superblock": true, 00:08:08.902 "num_base_bdevs": 4, 00:08:08.902 "num_base_bdevs_discovered": 4, 00:08:08.902 "num_base_bdevs_operational": 4, 00:08:08.902 "base_bdevs_list": [ 00:08:08.902 { 00:08:08.902 "name": "pt1", 00:08:08.902 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:08.902 "is_configured": true, 00:08:08.902 "data_offset": 2048, 00:08:08.902 "data_size": 63488 00:08:08.902 }, 00:08:08.902 { 00:08:08.902 "name": "pt2", 00:08:08.902 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:08.902 "is_configured": true, 00:08:08.902 "data_offset": 2048, 00:08:08.902 "data_size": 63488 00:08:08.902 }, 00:08:08.902 { 00:08:08.902 "name": "pt3", 00:08:08.902 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:08.902 "is_configured": true, 00:08:08.902 "data_offset": 2048, 00:08:08.902 "data_size": 63488 00:08:08.902 }, 00:08:08.902 { 00:08:08.902 "name": "pt4", 00:08:08.902 "uuid": "00000000-0000-0000-0000-000000000004", 00:08:08.902 "is_configured": true, 00:08:08.902 "data_offset": 2048, 00:08:08.902 "data_size": 63488 00:08:08.902 } 00:08:08.902 ] 00:08:08.902 }' 00:08:08.902 19:48:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:08.902 19:48:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.160 19:48:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:08:09.160 19:48:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:09.160 19:48:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:09.160 19:48:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:09.160 19:48:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:09.160 19:48:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:09.160 19:48:59 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:09.160 19:48:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.160 19:48:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.160 19:48:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:09.160 [2024-11-26 19:48:59.884266] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:09.160 19:48:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.160 19:48:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:09.160 "name": "raid_bdev1", 00:08:09.160 "aliases": [ 00:08:09.160 "766d3e1d-f084-4d31-91b8-9811e997f8e4" 00:08:09.160 ], 00:08:09.160 "product_name": "Raid Volume", 00:08:09.160 "block_size": 512, 00:08:09.160 "num_blocks": 253952, 00:08:09.160 "uuid": "766d3e1d-f084-4d31-91b8-9811e997f8e4", 00:08:09.160 "assigned_rate_limits": { 00:08:09.160 "rw_ios_per_sec": 0, 00:08:09.160 "rw_mbytes_per_sec": 0, 00:08:09.160 "r_mbytes_per_sec": 0, 00:08:09.160 "w_mbytes_per_sec": 0 00:08:09.160 }, 00:08:09.160 "claimed": false, 00:08:09.160 "zoned": false, 00:08:09.160 "supported_io_types": { 00:08:09.160 "read": true, 00:08:09.160 "write": true, 00:08:09.160 "unmap": true, 00:08:09.160 "flush": true, 00:08:09.160 "reset": true, 00:08:09.160 "nvme_admin": false, 00:08:09.160 "nvme_io": false, 00:08:09.160 "nvme_io_md": false, 00:08:09.160 "write_zeroes": true, 00:08:09.160 "zcopy": false, 00:08:09.160 "get_zone_info": false, 00:08:09.160 "zone_management": false, 00:08:09.160 "zone_append": false, 00:08:09.160 "compare": false, 00:08:09.160 "compare_and_write": false, 00:08:09.160 "abort": false, 00:08:09.160 "seek_hole": false, 00:08:09.160 "seek_data": false, 00:08:09.160 "copy": false, 00:08:09.160 "nvme_iov_md": false 00:08:09.160 }, 00:08:09.160 
"memory_domains": [ 00:08:09.160 { 00:08:09.160 "dma_device_id": "system", 00:08:09.160 "dma_device_type": 1 00:08:09.160 }, 00:08:09.160 { 00:08:09.160 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:09.160 "dma_device_type": 2 00:08:09.160 }, 00:08:09.160 { 00:08:09.160 "dma_device_id": "system", 00:08:09.160 "dma_device_type": 1 00:08:09.160 }, 00:08:09.160 { 00:08:09.160 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:09.160 "dma_device_type": 2 00:08:09.160 }, 00:08:09.160 { 00:08:09.160 "dma_device_id": "system", 00:08:09.160 "dma_device_type": 1 00:08:09.160 }, 00:08:09.160 { 00:08:09.160 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:09.160 "dma_device_type": 2 00:08:09.160 }, 00:08:09.160 { 00:08:09.160 "dma_device_id": "system", 00:08:09.160 "dma_device_type": 1 00:08:09.160 }, 00:08:09.160 { 00:08:09.160 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:09.160 "dma_device_type": 2 00:08:09.160 } 00:08:09.160 ], 00:08:09.160 "driver_specific": { 00:08:09.160 "raid": { 00:08:09.160 "uuid": "766d3e1d-f084-4d31-91b8-9811e997f8e4", 00:08:09.160 "strip_size_kb": 64, 00:08:09.160 "state": "online", 00:08:09.160 "raid_level": "raid0", 00:08:09.160 "superblock": true, 00:08:09.160 "num_base_bdevs": 4, 00:08:09.160 "num_base_bdevs_discovered": 4, 00:08:09.160 "num_base_bdevs_operational": 4, 00:08:09.160 "base_bdevs_list": [ 00:08:09.160 { 00:08:09.160 "name": "pt1", 00:08:09.160 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:09.160 "is_configured": true, 00:08:09.160 "data_offset": 2048, 00:08:09.160 "data_size": 63488 00:08:09.160 }, 00:08:09.160 { 00:08:09.160 "name": "pt2", 00:08:09.160 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:09.160 "is_configured": true, 00:08:09.160 "data_offset": 2048, 00:08:09.160 "data_size": 63488 00:08:09.160 }, 00:08:09.160 { 00:08:09.160 "name": "pt3", 00:08:09.160 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:09.160 "is_configured": true, 00:08:09.160 "data_offset": 2048, 00:08:09.161 "data_size": 63488 
00:08:09.161 }, 00:08:09.161 { 00:08:09.161 "name": "pt4", 00:08:09.161 "uuid": "00000000-0000-0000-0000-000000000004", 00:08:09.161 "is_configured": true, 00:08:09.161 "data_offset": 2048, 00:08:09.161 "data_size": 63488 00:08:09.161 } 00:08:09.161 ] 00:08:09.161 } 00:08:09.161 } 00:08:09.161 }' 00:08:09.161 19:48:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:09.161 19:48:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:09.161 pt2 00:08:09.161 pt3 00:08:09.161 pt4' 00:08:09.161 19:48:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:09.161 19:48:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:09.161 19:48:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:09.161 19:48:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:09.161 19:48:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:09.161 19:48:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.161 19:48:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.161 19:48:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.161 19:48:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:09.161 19:49:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:09.161 19:49:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:09.161 19:49:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:08:09.161 19:49:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.161 19:49:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.161 19:49:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:09.161 19:49:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.161 19:49:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:09.161 19:49:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:09.161 19:49:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:09.161 19:49:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:09.161 19:49:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:08:09.161 19:49:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.161 19:49:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.161 19:49:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.161 19:49:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:09.161 19:49:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:09.161 19:49:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:09.161 19:49:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:08:09.161 19:49:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.161 19:49:00 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:09.161 19:49:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.161 19:49:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.418 19:49:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:09.418 19:49:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:09.418 19:49:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:09.418 19:49:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:08:09.418 19:49:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.418 19:49:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.418 [2024-11-26 19:49:00.108263] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:09.418 19:49:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.418 19:49:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 766d3e1d-f084-4d31-91b8-9811e997f8e4 '!=' 766d3e1d-f084-4d31-91b8-9811e997f8e4 ']' 00:08:09.418 19:49:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:08:09.418 19:49:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:09.418 19:49:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:09.418 19:49:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 68950 00:08:09.418 19:49:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 68950 ']' 00:08:09.418 19:49:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 68950 00:08:09.418 19:49:00 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@959 -- # uname 00:08:09.418 19:49:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:09.418 19:49:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68950 00:08:09.418 killing process with pid 68950 00:08:09.418 19:49:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:09.418 19:49:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:09.418 19:49:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68950' 00:08:09.418 19:49:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 68950 00:08:09.418 [2024-11-26 19:49:00.159354] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:09.418 19:49:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 68950 00:08:09.418 [2024-11-26 19:49:00.159444] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:09.418 [2024-11-26 19:49:00.159519] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:09.418 [2024-11-26 19:49:00.159528] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:09.676 [2024-11-26 19:49:00.364203] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:10.242 19:49:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:08:10.242 00:08:10.242 real 0m3.893s 00:08:10.242 user 0m5.589s 00:08:10.242 sys 0m0.707s 00:08:10.242 19:49:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:10.242 ************************************ 00:08:10.242 END TEST raid_superblock_test 00:08:10.242 ************************************ 00:08:10.242 19:49:00 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.242 19:49:01 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 4 read 00:08:10.242 19:49:01 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:10.242 19:49:01 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:10.242 19:49:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:10.242 ************************************ 00:08:10.242 START TEST raid_read_error_test 00:08:10.242 ************************************ 00:08:10.242 19:49:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 4 read 00:08:10.242 19:49:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:08:10.242 19:49:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:08:10.242 19:49:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:08:10.242 19:49:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:10.242 19:49:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:10.242 19:49:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:10.242 19:49:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:10.242 19:49:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:10.242 19:49:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:10.242 19:49:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:10.242 19:49:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:10.242 19:49:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:08:10.242 19:49:01 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:10.242 19:49:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:10.242 19:49:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:08:10.242 19:49:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:10.242 19:49:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:10.242 19:49:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:08:10.242 19:49:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:10.242 19:49:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:10.242 19:49:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:10.242 19:49:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:10.242 19:49:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:10.242 19:49:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:10.242 19:49:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:08:10.242 19:49:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:10.242 19:49:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:10.242 19:49:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:10.242 19:49:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.YfCsjZQeYM 00:08:10.242 19:49:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=69198 00:08:10.242 19:49:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 69198 00:08:10.242 19:49:01 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@835 -- # '[' -z 69198 ']' 00:08:10.242 19:49:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:10.242 19:49:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:10.242 19:49:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:10.242 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:10.242 19:49:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:10.242 19:49:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.242 19:49:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:10.242 [2024-11-26 19:49:01.098694] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 
00:08:10.242 [2024-11-26 19:49:01.098817] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69198 ] 00:08:10.526 [2024-11-26 19:49:01.257092] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:10.526 [2024-11-26 19:49:01.374400] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.784 [2024-11-26 19:49:01.520907] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:10.784 [2024-11-26 19:49:01.520948] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:11.044 19:49:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:11.044 19:49:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:11.044 19:49:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:11.044 19:49:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:11.044 19:49:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.044 19:49:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.044 BaseBdev1_malloc 00:08:11.044 19:49:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.044 19:49:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:11.044 19:49:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.044 19:49:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.044 true 00:08:11.044 19:49:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:08:11.044 19:49:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:11.044 19:49:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.044 19:49:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.044 [2024-11-26 19:49:01.951188] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:11.044 [2024-11-26 19:49:01.951250] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:11.044 [2024-11-26 19:49:01.951271] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:11.044 [2024-11-26 19:49:01.951283] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:11.044 [2024-11-26 19:49:01.953551] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:11.044 [2024-11-26 19:49:01.953591] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:11.044 BaseBdev1 00:08:11.044 19:49:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.044 19:49:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:11.044 19:49:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:11.044 19:49:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.044 19:49:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.303 BaseBdev2_malloc 00:08:11.303 19:49:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.303 19:49:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:11.303 19:49:01 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.303 19:49:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.303 true 00:08:11.303 19:49:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.303 19:49:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:11.303 19:49:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.303 19:49:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.303 [2024-11-26 19:49:01.997235] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:11.303 [2024-11-26 19:49:01.997290] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:11.303 [2024-11-26 19:49:01.997309] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:11.303 [2024-11-26 19:49:01.997321] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:11.303 [2024-11-26 19:49:01.999606] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:11.303 [2024-11-26 19:49:01.999645] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:11.303 BaseBdev2 00:08:11.303 19:49:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.303 19:49:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:11.303 19:49:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:08:11.303 19:49:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.303 19:49:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.303 BaseBdev3_malloc 00:08:11.303 19:49:02 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.303 19:49:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:08:11.303 19:49:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.303 19:49:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.303 true 00:08:11.303 19:49:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.303 19:49:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:08:11.303 19:49:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.303 19:49:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.303 [2024-11-26 19:49:02.060006] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:08:11.303 [2024-11-26 19:49:02.060066] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:11.303 [2024-11-26 19:49:02.060085] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:08:11.303 [2024-11-26 19:49:02.060096] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:11.303 [2024-11-26 19:49:02.062326] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:11.303 [2024-11-26 19:49:02.062375] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:08:11.303 BaseBdev3 00:08:11.303 19:49:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.303 19:49:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:11.303 19:49:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:08:11.303 19:49:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.303 19:49:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.303 BaseBdev4_malloc 00:08:11.303 19:49:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.303 19:49:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:08:11.303 19:49:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.303 19:49:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.303 true 00:08:11.303 19:49:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.303 19:49:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:08:11.304 19:49:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.304 19:49:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.304 [2024-11-26 19:49:02.105957] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:08:11.304 [2024-11-26 19:49:02.106011] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:11.304 [2024-11-26 19:49:02.106029] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:08:11.304 [2024-11-26 19:49:02.106040] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:11.304 [2024-11-26 19:49:02.108266] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:11.304 [2024-11-26 19:49:02.108307] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:08:11.304 BaseBdev4 00:08:11.304 19:49:02 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.304 19:49:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:08:11.304 19:49:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.304 19:49:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.304 [2024-11-26 19:49:02.114037] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:11.304 [2024-11-26 19:49:02.115989] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:11.304 [2024-11-26 19:49:02.116070] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:11.304 [2024-11-26 19:49:02.116138] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:08:11.304 [2024-11-26 19:49:02.116378] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:08:11.304 [2024-11-26 19:49:02.116399] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:08:11.304 [2024-11-26 19:49:02.116655] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:08:11.304 [2024-11-26 19:49:02.116804] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:08:11.304 [2024-11-26 19:49:02.116816] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:08:11.304 [2024-11-26 19:49:02.116963] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:11.304 19:49:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.304 19:49:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:08:11.304 19:49:02 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:11.304 19:49:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:11.304 19:49:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:11.304 19:49:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:11.304 19:49:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:08:11.304 19:49:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:11.304 19:49:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:11.304 19:49:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:11.304 19:49:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:11.304 19:49:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:11.304 19:49:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.304 19:49:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:11.304 19:49:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.304 19:49:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.304 19:49:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:11.304 "name": "raid_bdev1", 00:08:11.304 "uuid": "5882718b-8275-4943-afc4-b49d3a1d8dbf", 00:08:11.304 "strip_size_kb": 64, 00:08:11.304 "state": "online", 00:08:11.304 "raid_level": "raid0", 00:08:11.304 "superblock": true, 00:08:11.304 "num_base_bdevs": 4, 00:08:11.304 "num_base_bdevs_discovered": 4, 00:08:11.304 "num_base_bdevs_operational": 4, 00:08:11.304 "base_bdevs_list": [ 00:08:11.304 
{ 00:08:11.304 "name": "BaseBdev1", 00:08:11.304 "uuid": "111628a1-2c4d-59d6-a86e-757fd9c6b2f5", 00:08:11.304 "is_configured": true, 00:08:11.304 "data_offset": 2048, 00:08:11.304 "data_size": 63488 00:08:11.304 }, 00:08:11.304 { 00:08:11.304 "name": "BaseBdev2", 00:08:11.304 "uuid": "071d23bd-6f88-5f0b-af98-14461e7d6a69", 00:08:11.304 "is_configured": true, 00:08:11.304 "data_offset": 2048, 00:08:11.304 "data_size": 63488 00:08:11.304 }, 00:08:11.304 { 00:08:11.304 "name": "BaseBdev3", 00:08:11.304 "uuid": "312f4637-79f0-5e3b-b050-951e05d6dd26", 00:08:11.304 "is_configured": true, 00:08:11.304 "data_offset": 2048, 00:08:11.304 "data_size": 63488 00:08:11.304 }, 00:08:11.304 { 00:08:11.304 "name": "BaseBdev4", 00:08:11.304 "uuid": "b9a38c46-3f21-5a55-9396-53be06a28ac0", 00:08:11.304 "is_configured": true, 00:08:11.304 "data_offset": 2048, 00:08:11.304 "data_size": 63488 00:08:11.304 } 00:08:11.304 ] 00:08:11.304 }' 00:08:11.304 19:49:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:11.304 19:49:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.562 19:49:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:11.562 19:49:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:11.822 [2024-11-26 19:49:02.519158] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:08:12.766 19:49:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:08:12.766 19:49:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.766 19:49:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.766 19:49:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.766 19:49:03 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:12.766 19:49:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:08:12.766 19:49:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:08:12.766 19:49:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:08:12.766 19:49:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:12.766 19:49:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:12.766 19:49:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:12.766 19:49:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:12.766 19:49:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:08:12.766 19:49:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:12.766 19:49:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:12.766 19:49:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:12.766 19:49:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:12.766 19:49:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:12.766 19:49:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:12.766 19:49:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.766 19:49:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.766 19:49:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.766 19:49:03 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:12.766 "name": "raid_bdev1", 00:08:12.766 "uuid": "5882718b-8275-4943-afc4-b49d3a1d8dbf", 00:08:12.766 "strip_size_kb": 64, 00:08:12.766 "state": "online", 00:08:12.766 "raid_level": "raid0", 00:08:12.766 "superblock": true, 00:08:12.766 "num_base_bdevs": 4, 00:08:12.766 "num_base_bdevs_discovered": 4, 00:08:12.766 "num_base_bdevs_operational": 4, 00:08:12.766 "base_bdevs_list": [ 00:08:12.766 { 00:08:12.766 "name": "BaseBdev1", 00:08:12.766 "uuid": "111628a1-2c4d-59d6-a86e-757fd9c6b2f5", 00:08:12.766 "is_configured": true, 00:08:12.766 "data_offset": 2048, 00:08:12.766 "data_size": 63488 00:08:12.766 }, 00:08:12.766 { 00:08:12.766 "name": "BaseBdev2", 00:08:12.766 "uuid": "071d23bd-6f88-5f0b-af98-14461e7d6a69", 00:08:12.766 "is_configured": true, 00:08:12.766 "data_offset": 2048, 00:08:12.766 "data_size": 63488 00:08:12.766 }, 00:08:12.766 { 00:08:12.766 "name": "BaseBdev3", 00:08:12.766 "uuid": "312f4637-79f0-5e3b-b050-951e05d6dd26", 00:08:12.766 "is_configured": true, 00:08:12.766 "data_offset": 2048, 00:08:12.766 "data_size": 63488 00:08:12.766 }, 00:08:12.766 { 00:08:12.766 "name": "BaseBdev4", 00:08:12.766 "uuid": "b9a38c46-3f21-5a55-9396-53be06a28ac0", 00:08:12.766 "is_configured": true, 00:08:12.766 "data_offset": 2048, 00:08:12.766 "data_size": 63488 00:08:12.766 } 00:08:12.766 ] 00:08:12.766 }' 00:08:12.766 19:49:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:12.766 19:49:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.028 19:49:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:13.028 19:49:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.028 19:49:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.028 [2024-11-26 19:49:03.745558] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:13.028 [2024-11-26 19:49:03.745599] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:13.028 [2024-11-26 19:49:03.748664] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:13.028 [2024-11-26 19:49:03.748734] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:13.028 [2024-11-26 19:49:03.748784] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:13.028 [2024-11-26 19:49:03.748797] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:08:13.028 19:49:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.028 19:49:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 69198 00:08:13.028 { 00:08:13.029 "results": [ 00:08:13.029 { 00:08:13.029 "job": "raid_bdev1", 00:08:13.029 "core_mask": "0x1", 00:08:13.029 "workload": "randrw", 00:08:13.029 "percentage": 50, 00:08:13.029 "status": "finished", 00:08:13.029 "queue_depth": 1, 00:08:13.029 "io_size": 131072, 00:08:13.029 "runtime": 1.224384, 00:08:13.029 "iops": 13941.704563274267, 00:08:13.029 "mibps": 1742.7130704092833, 00:08:13.029 "io_failed": 1, 00:08:13.029 "io_timeout": 0, 00:08:13.029 "avg_latency_us": 98.54769735448782, 00:08:13.029 "min_latency_us": 33.47692307692308, 00:08:13.029 "max_latency_us": 1714.0184615384615 00:08:13.029 } 00:08:13.029 ], 00:08:13.029 "core_count": 1 00:08:13.029 } 00:08:13.029 19:49:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 69198 ']' 00:08:13.029 19:49:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 69198 00:08:13.029 19:49:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:08:13.029 19:49:03 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:13.029 19:49:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69198 00:08:13.029 19:49:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:13.029 19:49:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:13.029 killing process with pid 69198 00:08:13.029 19:49:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69198' 00:08:13.029 19:49:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 69198 00:08:13.029 19:49:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 69198 00:08:13.029 [2024-11-26 19:49:03.776759] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:13.291 [2024-11-26 19:49:03.991071] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:13.902 19:49:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.YfCsjZQeYM 00:08:13.902 19:49:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:13.902 19:49:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:13.902 19:49:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.82 00:08:13.902 19:49:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:08:13.902 19:49:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:13.902 19:49:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:13.902 19:49:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.82 != \0\.\0\0 ]] 00:08:13.902 00:08:13.902 real 0m3.782s 00:08:13.902 user 0m4.347s 00:08:13.902 sys 0m0.467s 00:08:13.902 19:49:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:08:13.902 19:49:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.902 ************************************ 00:08:13.902 END TEST raid_read_error_test 00:08:13.902 ************************************ 00:08:14.164 19:49:04 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 4 write 00:08:14.164 19:49:04 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:14.164 19:49:04 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:14.164 19:49:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:14.164 ************************************ 00:08:14.164 START TEST raid_write_error_test 00:08:14.164 ************************************ 00:08:14.164 19:49:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 4 write 00:08:14.164 19:49:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:08:14.164 19:49:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:08:14.164 19:49:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:08:14.164 19:49:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:14.164 19:49:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:14.164 19:49:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:14.164 19:49:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:14.164 19:49:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:14.164 19:49:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:14.164 19:49:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:14.164 19:49:04 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:14.164 19:49:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:08:14.164 19:49:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:14.164 19:49:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:14.164 19:49:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:08:14.164 19:49:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:14.164 19:49:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:14.164 19:49:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:08:14.164 19:49:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:14.164 19:49:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:14.164 19:49:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:14.164 19:49:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:14.164 19:49:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:14.164 19:49:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:14.164 19:49:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:08:14.164 19:49:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:14.164 19:49:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:14.164 19:49:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:14.164 19:49:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.iredbBNFE8 00:08:14.164 19:49:04 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=69333 00:08:14.164 19:49:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 69333 00:08:14.164 19:49:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:14.164 19:49:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 69333 ']' 00:08:14.164 19:49:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:14.164 19:49:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:14.164 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:14.164 19:49:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:14.164 19:49:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:14.164 19:49:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.164 [2024-11-26 19:49:04.976441] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 
00:08:14.164 [2024-11-26 19:49:04.976568] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69333 ] 00:08:14.424 [2024-11-26 19:49:05.130339] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:14.424 [2024-11-26 19:49:05.245906] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:14.823 [2024-11-26 19:49:05.392062] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:14.823 [2024-11-26 19:49:05.392109] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:15.084 19:49:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:15.084 19:49:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:15.084 19:49:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:15.084 19:49:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:15.084 19:49:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.084 19:49:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.084 BaseBdev1_malloc 00:08:15.084 19:49:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.084 19:49:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:15.084 19:49:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.084 19:49:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.084 true 00:08:15.084 19:49:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:08:15.084 19:49:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:15.084 19:49:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.084 19:49:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.084 [2024-11-26 19:49:05.837044] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:15.084 [2024-11-26 19:49:05.837106] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:15.084 [2024-11-26 19:49:05.837126] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:15.084 [2024-11-26 19:49:05.837137] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:15.084 [2024-11-26 19:49:05.839496] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:15.084 [2024-11-26 19:49:05.839534] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:15.084 BaseBdev1 00:08:15.084 19:49:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.084 19:49:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:15.084 19:49:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:15.084 19:49:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.084 19:49:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.084 BaseBdev2_malloc 00:08:15.084 19:49:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.084 19:49:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:15.084 19:49:05 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.084 19:49:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.084 true 00:08:15.084 19:49:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.084 19:49:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:15.084 19:49:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.085 19:49:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.085 [2024-11-26 19:49:05.886904] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:15.085 [2024-11-26 19:49:05.886966] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:15.085 [2024-11-26 19:49:05.886983] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:15.085 [2024-11-26 19:49:05.886994] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:15.085 [2024-11-26 19:49:05.889206] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:15.085 [2024-11-26 19:49:05.889243] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:15.085 BaseBdev2 00:08:15.085 19:49:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.085 19:49:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:15.085 19:49:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:08:15.085 19:49:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.085 19:49:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:08:15.085 BaseBdev3_malloc 00:08:15.085 19:49:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.085 19:49:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:08:15.085 19:49:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.085 19:49:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.085 true 00:08:15.085 19:49:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.085 19:49:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:08:15.085 19:49:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.085 19:49:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.085 [2024-11-26 19:49:05.946973] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:08:15.085 [2024-11-26 19:49:05.947029] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:15.085 [2024-11-26 19:49:05.947049] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:08:15.085 [2024-11-26 19:49:05.947061] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:15.085 [2024-11-26 19:49:05.949316] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:15.085 [2024-11-26 19:49:05.949362] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:08:15.085 BaseBdev3 00:08:15.085 19:49:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.085 19:49:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:15.085 19:49:05 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:08:15.085 19:49:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.085 19:49:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.085 BaseBdev4_malloc 00:08:15.085 19:49:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.085 19:49:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:08:15.085 19:49:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.085 19:49:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.085 true 00:08:15.085 19:49:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.085 19:49:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:08:15.085 19:49:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.085 19:49:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.085 [2024-11-26 19:49:05.992989] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:08:15.085 [2024-11-26 19:49:05.993041] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:15.085 [2024-11-26 19:49:05.993059] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:08:15.085 [2024-11-26 19:49:05.993072] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:15.085 [2024-11-26 19:49:05.995279] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:15.085 [2024-11-26 19:49:05.995315] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:08:15.085 BaseBdev4 
00:08:15.085 19:49:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.085 19:49:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:08:15.085 19:49:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.085 19:49:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.085 [2024-11-26 19:49:06.001061] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:15.085 [2024-11-26 19:49:06.002995] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:15.085 [2024-11-26 19:49:06.003076] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:15.085 [2024-11-26 19:49:06.003144] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:08:15.085 [2024-11-26 19:49:06.003391] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:08:15.085 [2024-11-26 19:49:06.003407] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:08:15.085 [2024-11-26 19:49:06.003667] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:08:15.085 [2024-11-26 19:49:06.003823] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:08:15.085 [2024-11-26 19:49:06.003834] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:08:15.085 [2024-11-26 19:49:06.003981] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:15.085 19:49:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.085 19:49:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online raid0 64 4 00:08:15.085 19:49:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:15.085 19:49:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:15.085 19:49:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:15.085 19:49:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:15.085 19:49:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:08:15.085 19:49:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:15.085 19:49:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:15.085 19:49:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:15.085 19:49:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:15.085 19:49:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:15.085 19:49:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:15.085 19:49:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.085 19:49:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.346 19:49:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.346 19:49:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:15.346 "name": "raid_bdev1", 00:08:15.346 "uuid": "15ca5154-4938-41fe-8b56-73e380c05c56", 00:08:15.346 "strip_size_kb": 64, 00:08:15.346 "state": "online", 00:08:15.346 "raid_level": "raid0", 00:08:15.346 "superblock": true, 00:08:15.346 "num_base_bdevs": 4, 00:08:15.346 "num_base_bdevs_discovered": 4, 00:08:15.346 
"num_base_bdevs_operational": 4, 00:08:15.346 "base_bdevs_list": [ 00:08:15.346 { 00:08:15.346 "name": "BaseBdev1", 00:08:15.346 "uuid": "3f6c1652-6a98-5d40-be2d-0a1cc914e209", 00:08:15.346 "is_configured": true, 00:08:15.346 "data_offset": 2048, 00:08:15.346 "data_size": 63488 00:08:15.346 }, 00:08:15.346 { 00:08:15.346 "name": "BaseBdev2", 00:08:15.346 "uuid": "106c06e3-fa37-5342-b886-d669d3b7665a", 00:08:15.346 "is_configured": true, 00:08:15.346 "data_offset": 2048, 00:08:15.346 "data_size": 63488 00:08:15.346 }, 00:08:15.346 { 00:08:15.346 "name": "BaseBdev3", 00:08:15.346 "uuid": "abde11a3-267b-5050-aaa9-82dd70f480db", 00:08:15.346 "is_configured": true, 00:08:15.346 "data_offset": 2048, 00:08:15.346 "data_size": 63488 00:08:15.346 }, 00:08:15.346 { 00:08:15.346 "name": "BaseBdev4", 00:08:15.346 "uuid": "afd6ca06-9640-5d6d-ae2c-46132759abec", 00:08:15.346 "is_configured": true, 00:08:15.346 "data_offset": 2048, 00:08:15.346 "data_size": 63488 00:08:15.346 } 00:08:15.346 ] 00:08:15.346 }' 00:08:15.346 19:49:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:15.346 19:49:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.606 19:49:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:15.606 19:49:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:15.606 [2024-11-26 19:49:06.406153] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:08:16.542 19:49:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:08:16.542 19:49:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.542 19:49:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.542 19:49:07 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.542 19:49:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:16.542 19:49:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:08:16.542 19:49:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:08:16.542 19:49:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:08:16.542 19:49:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:16.542 19:49:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:16.542 19:49:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:16.542 19:49:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:16.542 19:49:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:08:16.542 19:49:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:16.542 19:49:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:16.542 19:49:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:16.542 19:49:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:16.542 19:49:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:16.542 19:49:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.542 19:49:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:16.542 19:49:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.542 19:49:07 bdev_raid.raid_write_error_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.542 19:49:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:16.542 "name": "raid_bdev1", 00:08:16.542 "uuid": "15ca5154-4938-41fe-8b56-73e380c05c56", 00:08:16.542 "strip_size_kb": 64, 00:08:16.542 "state": "online", 00:08:16.542 "raid_level": "raid0", 00:08:16.542 "superblock": true, 00:08:16.542 "num_base_bdevs": 4, 00:08:16.542 "num_base_bdevs_discovered": 4, 00:08:16.542 "num_base_bdevs_operational": 4, 00:08:16.542 "base_bdevs_list": [ 00:08:16.542 { 00:08:16.542 "name": "BaseBdev1", 00:08:16.542 "uuid": "3f6c1652-6a98-5d40-be2d-0a1cc914e209", 00:08:16.542 "is_configured": true, 00:08:16.542 "data_offset": 2048, 00:08:16.542 "data_size": 63488 00:08:16.542 }, 00:08:16.542 { 00:08:16.542 "name": "BaseBdev2", 00:08:16.542 "uuid": "106c06e3-fa37-5342-b886-d669d3b7665a", 00:08:16.542 "is_configured": true, 00:08:16.542 "data_offset": 2048, 00:08:16.542 "data_size": 63488 00:08:16.542 }, 00:08:16.542 { 00:08:16.542 "name": "BaseBdev3", 00:08:16.542 "uuid": "abde11a3-267b-5050-aaa9-82dd70f480db", 00:08:16.542 "is_configured": true, 00:08:16.542 "data_offset": 2048, 00:08:16.542 "data_size": 63488 00:08:16.542 }, 00:08:16.542 { 00:08:16.542 "name": "BaseBdev4", 00:08:16.542 "uuid": "afd6ca06-9640-5d6d-ae2c-46132759abec", 00:08:16.542 "is_configured": true, 00:08:16.542 "data_offset": 2048, 00:08:16.542 "data_size": 63488 00:08:16.542 } 00:08:16.542 ] 00:08:16.542 }' 00:08:16.542 19:49:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:16.542 19:49:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.801 19:49:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:16.801 19:49:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.801 19:49:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # 
set +x 00:08:16.801 [2024-11-26 19:49:07.636846] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:16.801 [2024-11-26 19:49:07.636877] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:16.801 [2024-11-26 19:49:07.639945] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:16.801 [2024-11-26 19:49:07.640012] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:16.801 [2024-11-26 19:49:07.640059] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:16.801 [2024-11-26 19:49:07.640070] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:08:16.801 { 00:08:16.801 "results": [ 00:08:16.801 { 00:08:16.801 "job": "raid_bdev1", 00:08:16.801 "core_mask": "0x1", 00:08:16.801 "workload": "randrw", 00:08:16.801 "percentage": 50, 00:08:16.801 "status": "finished", 00:08:16.801 "queue_depth": 1, 00:08:16.801 "io_size": 131072, 00:08:16.801 "runtime": 1.228435, 00:08:16.801 "iops": 13988.530121658858, 00:08:16.801 "mibps": 1748.5662652073572, 00:08:16.801 "io_failed": 1, 00:08:16.801 "io_timeout": 0, 00:08:16.801 "avg_latency_us": 98.16840554150534, 00:08:16.801 "min_latency_us": 33.47692307692308, 00:08:16.801 "max_latency_us": 1676.2092307692308 00:08:16.801 } 00:08:16.801 ], 00:08:16.801 "core_count": 1 00:08:16.801 } 00:08:16.801 19:49:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.801 19:49:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 69333 00:08:16.801 19:49:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 69333 ']' 00:08:16.801 19:49:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 69333 00:08:16.801 19:49:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 
00:08:16.801 19:49:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:16.801 19:49:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69333 00:08:16.801 killing process with pid 69333 00:08:16.801 19:49:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:16.801 19:49:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:16.801 19:49:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69333' 00:08:16.801 19:49:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 69333 00:08:16.801 [2024-11-26 19:49:07.668888] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:16.801 19:49:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 69333 00:08:17.060 [2024-11-26 19:49:07.878828] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:17.998 19:49:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:17.998 19:49:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.iredbBNFE8 00:08:17.998 19:49:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:17.998 ************************************ 00:08:17.998 END TEST raid_write_error_test 00:08:17.998 ************************************ 00:08:17.998 19:49:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.81 00:08:17.998 19:49:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:08:17.998 19:49:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:17.998 19:49:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:17.998 19:49:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- 
# [[ 0.81 != \0\.\0\0 ]] 00:08:17.998 00:08:17.998 real 0m3.809s 00:08:17.998 user 0m4.405s 00:08:17.998 sys 0m0.476s 00:08:17.998 19:49:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:17.998 19:49:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.998 19:49:08 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:08:17.998 19:49:08 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:08:17.998 19:49:08 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:17.998 19:49:08 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:17.998 19:49:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:17.998 ************************************ 00:08:17.998 START TEST raid_state_function_test 00:08:17.998 ************************************ 00:08:17.998 19:49:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 4 false 00:08:17.998 19:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:08:17.998 19:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:08:17.998 19:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:08:17.998 19:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:17.998 19:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:17.998 19:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:17.998 19:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:17.998 19:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:17.998 19:49:08 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:17.998 19:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:17.998 19:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:17.998 19:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:17.998 19:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:17.998 19:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:17.998 19:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:17.998 19:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:08:17.998 19:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:17.998 19:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:17.998 19:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:08:17.998 19:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:17.998 19:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:17.998 19:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:17.998 19:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:17.998 19:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:17.998 19:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:08:17.998 19:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:17.998 19:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # 
strip_size_create_arg='-z 64' 00:08:17.998 19:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:08:17.998 19:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:08:17.998 Process raid pid: 69471 00:08:17.998 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:17.998 19:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=69471 00:08:17.998 19:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 69471' 00:08:17.998 19:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:17.998 19:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 69471 00:08:17.998 19:49:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 69471 ']' 00:08:17.998 19:49:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:17.998 19:49:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:17.998 19:49:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:17.998 19:49:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:17.998 19:49:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.998 [2024-11-26 19:49:08.786135] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 
00:08:17.999 [2024-11-26 19:49:08.786260] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:18.257 [2024-11-26 19:49:08.946455] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:18.257 [2024-11-26 19:49:09.046038] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:18.257 [2024-11-26 19:49:09.183594] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:18.257 [2024-11-26 19:49:09.183630] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:18.823 19:49:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:18.823 19:49:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:08:18.823 19:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:08:18.823 19:49:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.823 19:49:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.823 [2024-11-26 19:49:09.634146] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:18.823 [2024-11-26 19:49:09.634198] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:18.823 [2024-11-26 19:49:09.634208] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:18.823 [2024-11-26 19:49:09.634217] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:18.823 [2024-11-26 19:49:09.634224] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:08:18.823 [2024-11-26 19:49:09.634233] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:18.823 [2024-11-26 19:49:09.634239] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:08:18.823 [2024-11-26 19:49:09.634248] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:08:18.823 19:49:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.823 19:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:08:18.823 19:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:18.823 19:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:18.823 19:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:18.823 19:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:18.823 19:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:08:18.823 19:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:18.823 19:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:18.823 19:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:18.823 19:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:18.823 19:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:18.823 19:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:18.823 19:49:09 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.823 19:49:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.823 19:49:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.823 19:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:18.823 "name": "Existed_Raid", 00:08:18.823 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:18.823 "strip_size_kb": 64, 00:08:18.823 "state": "configuring", 00:08:18.823 "raid_level": "concat", 00:08:18.823 "superblock": false, 00:08:18.823 "num_base_bdevs": 4, 00:08:18.823 "num_base_bdevs_discovered": 0, 00:08:18.823 "num_base_bdevs_operational": 4, 00:08:18.823 "base_bdevs_list": [ 00:08:18.823 { 00:08:18.823 "name": "BaseBdev1", 00:08:18.823 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:18.823 "is_configured": false, 00:08:18.823 "data_offset": 0, 00:08:18.823 "data_size": 0 00:08:18.823 }, 00:08:18.823 { 00:08:18.823 "name": "BaseBdev2", 00:08:18.823 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:18.823 "is_configured": false, 00:08:18.823 "data_offset": 0, 00:08:18.823 "data_size": 0 00:08:18.823 }, 00:08:18.823 { 00:08:18.823 "name": "BaseBdev3", 00:08:18.823 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:18.823 "is_configured": false, 00:08:18.823 "data_offset": 0, 00:08:18.823 "data_size": 0 00:08:18.823 }, 00:08:18.823 { 00:08:18.823 "name": "BaseBdev4", 00:08:18.823 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:18.823 "is_configured": false, 00:08:18.823 "data_offset": 0, 00:08:18.823 "data_size": 0 00:08:18.823 } 00:08:18.823 ] 00:08:18.823 }' 00:08:18.823 19:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:18.823 19:49:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.110 19:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:08:19.110 19:49:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.110 19:49:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.110 [2024-11-26 19:49:09.990223] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:19.110 [2024-11-26 19:49:09.990277] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:19.110 19:49:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.110 19:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:08:19.110 19:49:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.110 19:49:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.110 [2024-11-26 19:49:09.998204] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:19.110 [2024-11-26 19:49:09.998249] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:19.111 [2024-11-26 19:49:09.998259] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:19.111 [2024-11-26 19:49:09.998270] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:19.111 [2024-11-26 19:49:09.998277] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:19.111 [2024-11-26 19:49:09.998287] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:19.111 [2024-11-26 19:49:09.998294] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:08:19.111 [2024-11-26 19:49:09.998304] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:08:19.111 19:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.111 19:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:19.111 19:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.111 19:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.111 [2024-11-26 19:49:10.033028] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:19.111 BaseBdev1 00:08:19.111 19:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.111 19:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:19.111 19:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:19.111 19:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:19.111 19:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:19.111 19:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:19.111 19:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:19.111 19:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:19.111 19:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.111 19:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.111 19:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.111 19:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:19.111 19:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.111 19:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.372 [ 00:08:19.372 { 00:08:19.372 "name": "BaseBdev1", 00:08:19.372 "aliases": [ 00:08:19.372 "ac367649-3ffe-4c33-92c7-29ea21d0b7ae" 00:08:19.372 ], 00:08:19.372 "product_name": "Malloc disk", 00:08:19.372 "block_size": 512, 00:08:19.372 "num_blocks": 65536, 00:08:19.372 "uuid": "ac367649-3ffe-4c33-92c7-29ea21d0b7ae", 00:08:19.372 "assigned_rate_limits": { 00:08:19.372 "rw_ios_per_sec": 0, 00:08:19.372 "rw_mbytes_per_sec": 0, 00:08:19.372 "r_mbytes_per_sec": 0, 00:08:19.372 "w_mbytes_per_sec": 0 00:08:19.372 }, 00:08:19.372 "claimed": true, 00:08:19.372 "claim_type": "exclusive_write", 00:08:19.372 "zoned": false, 00:08:19.372 "supported_io_types": { 00:08:19.372 "read": true, 00:08:19.372 "write": true, 00:08:19.372 "unmap": true, 00:08:19.372 "flush": true, 00:08:19.372 "reset": true, 00:08:19.372 "nvme_admin": false, 00:08:19.372 "nvme_io": false, 00:08:19.372 "nvme_io_md": false, 00:08:19.372 "write_zeroes": true, 00:08:19.372 "zcopy": true, 00:08:19.372 "get_zone_info": false, 00:08:19.372 "zone_management": false, 00:08:19.372 "zone_append": false, 00:08:19.372 "compare": false, 00:08:19.372 "compare_and_write": false, 00:08:19.372 "abort": true, 00:08:19.372 "seek_hole": false, 00:08:19.372 "seek_data": false, 00:08:19.372 "copy": true, 00:08:19.372 "nvme_iov_md": false 00:08:19.372 }, 00:08:19.372 "memory_domains": [ 00:08:19.372 { 00:08:19.372 "dma_device_id": "system", 00:08:19.372 "dma_device_type": 1 00:08:19.372 }, 00:08:19.372 { 00:08:19.372 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:19.372 "dma_device_type": 2 00:08:19.372 } 00:08:19.372 ], 00:08:19.372 "driver_specific": {} 00:08:19.372 } 00:08:19.372 ] 00:08:19.372 19:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:08:19.372 19:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:19.372 19:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:08:19.372 19:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:19.372 19:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:19.372 19:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:19.372 19:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:19.372 19:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:08:19.372 19:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:19.372 19:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:19.372 19:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:19.372 19:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:19.372 19:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:19.372 19:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:19.372 19:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.372 19:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.372 19:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.372 19:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:19.372 "name": "Existed_Raid", 
00:08:19.372 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:19.372 "strip_size_kb": 64, 00:08:19.372 "state": "configuring", 00:08:19.372 "raid_level": "concat", 00:08:19.372 "superblock": false, 00:08:19.372 "num_base_bdevs": 4, 00:08:19.372 "num_base_bdevs_discovered": 1, 00:08:19.372 "num_base_bdevs_operational": 4, 00:08:19.372 "base_bdevs_list": [ 00:08:19.372 { 00:08:19.373 "name": "BaseBdev1", 00:08:19.373 "uuid": "ac367649-3ffe-4c33-92c7-29ea21d0b7ae", 00:08:19.373 "is_configured": true, 00:08:19.373 "data_offset": 0, 00:08:19.373 "data_size": 65536 00:08:19.373 }, 00:08:19.373 { 00:08:19.373 "name": "BaseBdev2", 00:08:19.373 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:19.373 "is_configured": false, 00:08:19.373 "data_offset": 0, 00:08:19.373 "data_size": 0 00:08:19.373 }, 00:08:19.373 { 00:08:19.373 "name": "BaseBdev3", 00:08:19.373 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:19.373 "is_configured": false, 00:08:19.373 "data_offset": 0, 00:08:19.373 "data_size": 0 00:08:19.373 }, 00:08:19.373 { 00:08:19.373 "name": "BaseBdev4", 00:08:19.373 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:19.373 "is_configured": false, 00:08:19.373 "data_offset": 0, 00:08:19.373 "data_size": 0 00:08:19.373 } 00:08:19.373 ] 00:08:19.373 }' 00:08:19.373 19:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:19.373 19:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.636 19:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:19.636 19:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.636 19:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.636 [2024-11-26 19:49:10.381170] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:19.636 [2024-11-26 19:49:10.381397] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:19.636 19:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.636 19:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:08:19.636 19:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.636 19:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.636 [2024-11-26 19:49:10.393224] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:19.636 [2024-11-26 19:49:10.395290] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:19.636 [2024-11-26 19:49:10.395432] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:19.636 [2024-11-26 19:49:10.395496] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:19.636 [2024-11-26 19:49:10.395530] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:19.636 [2024-11-26 19:49:10.395647] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:08:19.636 [2024-11-26 19:49:10.395784] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:08:19.636 19:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.636 19:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:19.636 19:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:19.636 19:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:08:19.636 19:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:19.636 19:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:19.636 19:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:19.636 19:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:19.636 19:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:08:19.636 19:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:19.636 19:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:19.636 19:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:19.636 19:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:19.636 19:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:19.636 19:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:19.636 19:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.636 19:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.636 19:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.636 19:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:19.636 "name": "Existed_Raid", 00:08:19.636 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:19.636 "strip_size_kb": 64, 00:08:19.636 "state": "configuring", 00:08:19.636 "raid_level": "concat", 00:08:19.636 "superblock": false, 00:08:19.636 "num_base_bdevs": 4, 00:08:19.636 
"num_base_bdevs_discovered": 1, 00:08:19.636 "num_base_bdevs_operational": 4, 00:08:19.636 "base_bdevs_list": [ 00:08:19.636 { 00:08:19.636 "name": "BaseBdev1", 00:08:19.636 "uuid": "ac367649-3ffe-4c33-92c7-29ea21d0b7ae", 00:08:19.636 "is_configured": true, 00:08:19.636 "data_offset": 0, 00:08:19.636 "data_size": 65536 00:08:19.636 }, 00:08:19.636 { 00:08:19.636 "name": "BaseBdev2", 00:08:19.636 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:19.636 "is_configured": false, 00:08:19.636 "data_offset": 0, 00:08:19.636 "data_size": 0 00:08:19.636 }, 00:08:19.636 { 00:08:19.636 "name": "BaseBdev3", 00:08:19.636 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:19.636 "is_configured": false, 00:08:19.636 "data_offset": 0, 00:08:19.636 "data_size": 0 00:08:19.636 }, 00:08:19.636 { 00:08:19.636 "name": "BaseBdev4", 00:08:19.636 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:19.636 "is_configured": false, 00:08:19.636 "data_offset": 0, 00:08:19.636 "data_size": 0 00:08:19.636 } 00:08:19.636 ] 00:08:19.636 }' 00:08:19.636 19:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:19.636 19:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.897 19:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:19.897 19:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.897 19:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.897 [2024-11-26 19:49:10.749933] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:19.897 BaseBdev2 00:08:19.897 19:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.897 19:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:19.897 19:49:10 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:19.897 19:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:19.897 19:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:19.897 19:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:19.897 19:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:19.897 19:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:19.897 19:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.897 19:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.897 19:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.897 19:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:19.897 19:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.897 19:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.897 [ 00:08:19.897 { 00:08:19.897 "name": "BaseBdev2", 00:08:19.897 "aliases": [ 00:08:19.897 "6f3c846d-bbbb-4e45-a4ef-3c1870e3cb2d" 00:08:19.897 ], 00:08:19.897 "product_name": "Malloc disk", 00:08:19.897 "block_size": 512, 00:08:19.897 "num_blocks": 65536, 00:08:19.897 "uuid": "6f3c846d-bbbb-4e45-a4ef-3c1870e3cb2d", 00:08:19.897 "assigned_rate_limits": { 00:08:19.897 "rw_ios_per_sec": 0, 00:08:19.897 "rw_mbytes_per_sec": 0, 00:08:19.897 "r_mbytes_per_sec": 0, 00:08:19.897 "w_mbytes_per_sec": 0 00:08:19.897 }, 00:08:19.897 "claimed": true, 00:08:19.897 "claim_type": "exclusive_write", 00:08:19.897 "zoned": false, 00:08:19.897 "supported_io_types": { 
00:08:19.897 "read": true, 00:08:19.897 "write": true, 00:08:19.897 "unmap": true, 00:08:19.897 "flush": true, 00:08:19.897 "reset": true, 00:08:19.897 "nvme_admin": false, 00:08:19.897 "nvme_io": false, 00:08:19.897 "nvme_io_md": false, 00:08:19.897 "write_zeroes": true, 00:08:19.897 "zcopy": true, 00:08:19.897 "get_zone_info": false, 00:08:19.897 "zone_management": false, 00:08:19.897 "zone_append": false, 00:08:19.897 "compare": false, 00:08:19.897 "compare_and_write": false, 00:08:19.897 "abort": true, 00:08:19.897 "seek_hole": false, 00:08:19.897 "seek_data": false, 00:08:19.897 "copy": true, 00:08:19.897 "nvme_iov_md": false 00:08:19.897 }, 00:08:19.897 "memory_domains": [ 00:08:19.897 { 00:08:19.897 "dma_device_id": "system", 00:08:19.897 "dma_device_type": 1 00:08:19.897 }, 00:08:19.897 { 00:08:19.897 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:19.897 "dma_device_type": 2 00:08:19.897 } 00:08:19.897 ], 00:08:19.897 "driver_specific": {} 00:08:19.897 } 00:08:19.897 ] 00:08:19.897 19:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.897 19:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:19.897 19:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:19.897 19:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:19.897 19:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:08:19.897 19:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:19.897 19:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:19.897 19:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:19.897 19:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:08:19.897 19:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:08:19.897 19:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:19.897 19:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:19.897 19:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:19.897 19:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:19.897 19:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:19.897 19:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.897 19:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.897 19:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:19.897 19:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.897 19:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:19.897 "name": "Existed_Raid", 00:08:19.897 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:19.897 "strip_size_kb": 64, 00:08:19.897 "state": "configuring", 00:08:19.897 "raid_level": "concat", 00:08:19.897 "superblock": false, 00:08:19.897 "num_base_bdevs": 4, 00:08:19.897 "num_base_bdevs_discovered": 2, 00:08:19.897 "num_base_bdevs_operational": 4, 00:08:19.897 "base_bdevs_list": [ 00:08:19.897 { 00:08:19.897 "name": "BaseBdev1", 00:08:19.897 "uuid": "ac367649-3ffe-4c33-92c7-29ea21d0b7ae", 00:08:19.897 "is_configured": true, 00:08:19.898 "data_offset": 0, 00:08:19.898 "data_size": 65536 00:08:19.898 }, 00:08:19.898 { 00:08:19.898 "name": "BaseBdev2", 00:08:19.898 "uuid": "6f3c846d-bbbb-4e45-a4ef-3c1870e3cb2d", 00:08:19.898 
"is_configured": true, 00:08:19.898 "data_offset": 0, 00:08:19.898 "data_size": 65536 00:08:19.898 }, 00:08:19.898 { 00:08:19.898 "name": "BaseBdev3", 00:08:19.898 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:19.898 "is_configured": false, 00:08:19.898 "data_offset": 0, 00:08:19.898 "data_size": 0 00:08:19.898 }, 00:08:19.898 { 00:08:19.898 "name": "BaseBdev4", 00:08:19.898 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:19.898 "is_configured": false, 00:08:19.898 "data_offset": 0, 00:08:19.898 "data_size": 0 00:08:19.898 } 00:08:19.898 ] 00:08:19.898 }' 00:08:19.898 19:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:19.898 19:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.468 19:49:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:20.468 19:49:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.468 19:49:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.468 [2024-11-26 19:49:11.187220] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:20.468 BaseBdev3 00:08:20.468 19:49:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.468 19:49:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:08:20.468 19:49:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:20.468 19:49:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:20.468 19:49:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:20.468 19:49:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:20.468 19:49:11 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:20.468 19:49:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:20.468 19:49:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.468 19:49:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.468 19:49:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.468 19:49:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:20.468 19:49:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.468 19:49:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.468 [ 00:08:20.468 { 00:08:20.468 "name": "BaseBdev3", 00:08:20.468 "aliases": [ 00:08:20.468 "0a7812ec-52ec-4845-8e5e-68d4fa49aea7" 00:08:20.468 ], 00:08:20.468 "product_name": "Malloc disk", 00:08:20.468 "block_size": 512, 00:08:20.468 "num_blocks": 65536, 00:08:20.468 "uuid": "0a7812ec-52ec-4845-8e5e-68d4fa49aea7", 00:08:20.468 "assigned_rate_limits": { 00:08:20.468 "rw_ios_per_sec": 0, 00:08:20.468 "rw_mbytes_per_sec": 0, 00:08:20.468 "r_mbytes_per_sec": 0, 00:08:20.468 "w_mbytes_per_sec": 0 00:08:20.468 }, 00:08:20.468 "claimed": true, 00:08:20.468 "claim_type": "exclusive_write", 00:08:20.468 "zoned": false, 00:08:20.468 "supported_io_types": { 00:08:20.468 "read": true, 00:08:20.468 "write": true, 00:08:20.468 "unmap": true, 00:08:20.468 "flush": true, 00:08:20.468 "reset": true, 00:08:20.468 "nvme_admin": false, 00:08:20.468 "nvme_io": false, 00:08:20.468 "nvme_io_md": false, 00:08:20.468 "write_zeroes": true, 00:08:20.468 "zcopy": true, 00:08:20.468 "get_zone_info": false, 00:08:20.468 "zone_management": false, 00:08:20.468 "zone_append": false, 00:08:20.468 "compare": false, 00:08:20.468 "compare_and_write": false, 
00:08:20.468 "abort": true, 00:08:20.468 "seek_hole": false, 00:08:20.468 "seek_data": false, 00:08:20.468 "copy": true, 00:08:20.468 "nvme_iov_md": false 00:08:20.468 }, 00:08:20.468 "memory_domains": [ 00:08:20.468 { 00:08:20.468 "dma_device_id": "system", 00:08:20.468 "dma_device_type": 1 00:08:20.468 }, 00:08:20.468 { 00:08:20.468 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:20.468 "dma_device_type": 2 00:08:20.468 } 00:08:20.468 ], 00:08:20.468 "driver_specific": {} 00:08:20.468 } 00:08:20.468 ] 00:08:20.468 19:49:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.468 19:49:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:20.468 19:49:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:20.468 19:49:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:20.468 19:49:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:08:20.468 19:49:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:20.468 19:49:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:20.468 19:49:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:20.468 19:49:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:20.468 19:49:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:08:20.468 19:49:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:20.468 19:49:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:20.468 19:49:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:08:20.468 19:49:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:20.468 19:49:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:20.468 19:49:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:20.468 19:49:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.468 19:49:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.468 19:49:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.468 19:49:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:20.468 "name": "Existed_Raid", 00:08:20.468 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:20.468 "strip_size_kb": 64, 00:08:20.468 "state": "configuring", 00:08:20.468 "raid_level": "concat", 00:08:20.468 "superblock": false, 00:08:20.468 "num_base_bdevs": 4, 00:08:20.468 "num_base_bdevs_discovered": 3, 00:08:20.468 "num_base_bdevs_operational": 4, 00:08:20.468 "base_bdevs_list": [ 00:08:20.468 { 00:08:20.468 "name": "BaseBdev1", 00:08:20.468 "uuid": "ac367649-3ffe-4c33-92c7-29ea21d0b7ae", 00:08:20.468 "is_configured": true, 00:08:20.468 "data_offset": 0, 00:08:20.468 "data_size": 65536 00:08:20.468 }, 00:08:20.468 { 00:08:20.468 "name": "BaseBdev2", 00:08:20.468 "uuid": "6f3c846d-bbbb-4e45-a4ef-3c1870e3cb2d", 00:08:20.468 "is_configured": true, 00:08:20.468 "data_offset": 0, 00:08:20.468 "data_size": 65536 00:08:20.468 }, 00:08:20.468 { 00:08:20.468 "name": "BaseBdev3", 00:08:20.468 "uuid": "0a7812ec-52ec-4845-8e5e-68d4fa49aea7", 00:08:20.468 "is_configured": true, 00:08:20.468 "data_offset": 0, 00:08:20.468 "data_size": 65536 00:08:20.468 }, 00:08:20.468 { 00:08:20.468 "name": "BaseBdev4", 00:08:20.468 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:20.468 "is_configured": false, 
00:08:20.468 "data_offset": 0, 00:08:20.468 "data_size": 0 00:08:20.468 } 00:08:20.468 ] 00:08:20.468 }' 00:08:20.468 19:49:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:20.468 19:49:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.730 19:49:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:08:20.730 19:49:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.730 19:49:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.730 [2024-11-26 19:49:11.608074] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:08:20.730 [2024-11-26 19:49:11.608267] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:20.730 [2024-11-26 19:49:11.608301] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:08:20.730 [2024-11-26 19:49:11.609083] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:08:20.730 [2024-11-26 19:49:11.609273] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:20.730 [2024-11-26 19:49:11.609290] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:20.730 [2024-11-26 19:49:11.609563] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:20.730 BaseBdev4 00:08:20.730 19:49:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.730 19:49:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:08:20.730 19:49:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:08:20.730 19:49:11 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:20.730 19:49:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:20.730 19:49:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:20.730 19:49:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:20.730 19:49:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:20.730 19:49:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.730 19:49:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.730 19:49:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.730 19:49:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:08:20.730 19:49:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.730 19:49:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.730 [ 00:08:20.730 { 00:08:20.730 "name": "BaseBdev4", 00:08:20.730 "aliases": [ 00:08:20.730 "eda3f55f-c3ff-43ed-b675-571959d6aea2" 00:08:20.730 ], 00:08:20.730 "product_name": "Malloc disk", 00:08:20.730 "block_size": 512, 00:08:20.730 "num_blocks": 65536, 00:08:20.730 "uuid": "eda3f55f-c3ff-43ed-b675-571959d6aea2", 00:08:20.730 "assigned_rate_limits": { 00:08:20.730 "rw_ios_per_sec": 0, 00:08:20.730 "rw_mbytes_per_sec": 0, 00:08:20.730 "r_mbytes_per_sec": 0, 00:08:20.730 "w_mbytes_per_sec": 0 00:08:20.730 }, 00:08:20.730 "claimed": true, 00:08:20.730 "claim_type": "exclusive_write", 00:08:20.730 "zoned": false, 00:08:20.730 "supported_io_types": { 00:08:20.730 "read": true, 00:08:20.730 "write": true, 00:08:20.730 "unmap": true, 00:08:20.730 "flush": true, 00:08:20.730 "reset": true, 00:08:20.730 
"nvme_admin": false, 00:08:20.730 "nvme_io": false, 00:08:20.730 "nvme_io_md": false, 00:08:20.730 "write_zeroes": true, 00:08:20.730 "zcopy": true, 00:08:20.730 "get_zone_info": false, 00:08:20.730 "zone_management": false, 00:08:20.730 "zone_append": false, 00:08:20.730 "compare": false, 00:08:20.730 "compare_and_write": false, 00:08:20.730 "abort": true, 00:08:20.730 "seek_hole": false, 00:08:20.730 "seek_data": false, 00:08:20.730 "copy": true, 00:08:20.730 "nvme_iov_md": false 00:08:20.730 }, 00:08:20.730 "memory_domains": [ 00:08:20.730 { 00:08:20.730 "dma_device_id": "system", 00:08:20.730 "dma_device_type": 1 00:08:20.730 }, 00:08:20.730 { 00:08:20.730 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:20.730 "dma_device_type": 2 00:08:20.730 } 00:08:20.730 ], 00:08:20.730 "driver_specific": {} 00:08:20.730 } 00:08:20.730 ] 00:08:20.730 19:49:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.730 19:49:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:20.730 19:49:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:20.730 19:49:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:20.730 19:49:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:08:20.730 19:49:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:20.730 19:49:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:20.730 19:49:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:20.730 19:49:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:20.730 19:49:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:08:20.730 
19:49:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:20.730 19:49:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:20.730 19:49:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:20.730 19:49:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:20.730 19:49:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:20.730 19:49:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:20.730 19:49:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.730 19:49:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.730 19:49:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.991 19:49:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:20.991 "name": "Existed_Raid", 00:08:20.991 "uuid": "179ee39e-61fd-42cf-b52e-0ebcf8e46cb7", 00:08:20.991 "strip_size_kb": 64, 00:08:20.991 "state": "online", 00:08:20.991 "raid_level": "concat", 00:08:20.991 "superblock": false, 00:08:20.991 "num_base_bdevs": 4, 00:08:20.991 "num_base_bdevs_discovered": 4, 00:08:20.991 "num_base_bdevs_operational": 4, 00:08:20.991 "base_bdevs_list": [ 00:08:20.991 { 00:08:20.991 "name": "BaseBdev1", 00:08:20.991 "uuid": "ac367649-3ffe-4c33-92c7-29ea21d0b7ae", 00:08:20.991 "is_configured": true, 00:08:20.991 "data_offset": 0, 00:08:20.991 "data_size": 65536 00:08:20.991 }, 00:08:20.991 { 00:08:20.991 "name": "BaseBdev2", 00:08:20.991 "uuid": "6f3c846d-bbbb-4e45-a4ef-3c1870e3cb2d", 00:08:20.991 "is_configured": true, 00:08:20.991 "data_offset": 0, 00:08:20.991 "data_size": 65536 00:08:20.991 }, 00:08:20.991 { 00:08:20.991 "name": "BaseBdev3", 
00:08:20.991 "uuid": "0a7812ec-52ec-4845-8e5e-68d4fa49aea7", 00:08:20.991 "is_configured": true, 00:08:20.991 "data_offset": 0, 00:08:20.991 "data_size": 65536 00:08:20.991 }, 00:08:20.991 { 00:08:20.991 "name": "BaseBdev4", 00:08:20.991 "uuid": "eda3f55f-c3ff-43ed-b675-571959d6aea2", 00:08:20.991 "is_configured": true, 00:08:20.991 "data_offset": 0, 00:08:20.991 "data_size": 65536 00:08:20.991 } 00:08:20.991 ] 00:08:20.991 }' 00:08:20.991 19:49:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:20.991 19:49:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.252 19:49:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:21.252 19:49:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:21.252 19:49:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:21.252 19:49:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:21.252 19:49:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:21.252 19:49:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:21.252 19:49:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:21.252 19:49:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:21.252 19:49:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.252 19:49:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.252 [2024-11-26 19:49:11.984636] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:21.252 19:49:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.252 
19:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:21.252 "name": "Existed_Raid", 00:08:21.252 "aliases": [ 00:08:21.252 "179ee39e-61fd-42cf-b52e-0ebcf8e46cb7" 00:08:21.252 ], 00:08:21.252 "product_name": "Raid Volume", 00:08:21.252 "block_size": 512, 00:08:21.252 "num_blocks": 262144, 00:08:21.252 "uuid": "179ee39e-61fd-42cf-b52e-0ebcf8e46cb7", 00:08:21.252 "assigned_rate_limits": { 00:08:21.252 "rw_ios_per_sec": 0, 00:08:21.252 "rw_mbytes_per_sec": 0, 00:08:21.252 "r_mbytes_per_sec": 0, 00:08:21.252 "w_mbytes_per_sec": 0 00:08:21.252 }, 00:08:21.252 "claimed": false, 00:08:21.252 "zoned": false, 00:08:21.252 "supported_io_types": { 00:08:21.252 "read": true, 00:08:21.252 "write": true, 00:08:21.252 "unmap": true, 00:08:21.252 "flush": true, 00:08:21.252 "reset": true, 00:08:21.252 "nvme_admin": false, 00:08:21.252 "nvme_io": false, 00:08:21.252 "nvme_io_md": false, 00:08:21.252 "write_zeroes": true, 00:08:21.252 "zcopy": false, 00:08:21.252 "get_zone_info": false, 00:08:21.252 "zone_management": false, 00:08:21.252 "zone_append": false, 00:08:21.252 "compare": false, 00:08:21.252 "compare_and_write": false, 00:08:21.252 "abort": false, 00:08:21.252 "seek_hole": false, 00:08:21.252 "seek_data": false, 00:08:21.252 "copy": false, 00:08:21.252 "nvme_iov_md": false 00:08:21.252 }, 00:08:21.252 "memory_domains": [ 00:08:21.252 { 00:08:21.252 "dma_device_id": "system", 00:08:21.252 "dma_device_type": 1 00:08:21.252 }, 00:08:21.252 { 00:08:21.252 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:21.252 "dma_device_type": 2 00:08:21.252 }, 00:08:21.252 { 00:08:21.252 "dma_device_id": "system", 00:08:21.252 "dma_device_type": 1 00:08:21.252 }, 00:08:21.252 { 00:08:21.252 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:21.252 "dma_device_type": 2 00:08:21.252 }, 00:08:21.252 { 00:08:21.252 "dma_device_id": "system", 00:08:21.252 "dma_device_type": 1 00:08:21.252 }, 00:08:21.252 { 00:08:21.252 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:08:21.252 "dma_device_type": 2 00:08:21.252 }, 00:08:21.252 { 00:08:21.252 "dma_device_id": "system", 00:08:21.252 "dma_device_type": 1 00:08:21.252 }, 00:08:21.252 { 00:08:21.252 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:21.252 "dma_device_type": 2 00:08:21.252 } 00:08:21.252 ], 00:08:21.252 "driver_specific": { 00:08:21.252 "raid": { 00:08:21.252 "uuid": "179ee39e-61fd-42cf-b52e-0ebcf8e46cb7", 00:08:21.252 "strip_size_kb": 64, 00:08:21.252 "state": "online", 00:08:21.252 "raid_level": "concat", 00:08:21.252 "superblock": false, 00:08:21.252 "num_base_bdevs": 4, 00:08:21.252 "num_base_bdevs_discovered": 4, 00:08:21.252 "num_base_bdevs_operational": 4, 00:08:21.252 "base_bdevs_list": [ 00:08:21.252 { 00:08:21.252 "name": "BaseBdev1", 00:08:21.252 "uuid": "ac367649-3ffe-4c33-92c7-29ea21d0b7ae", 00:08:21.252 "is_configured": true, 00:08:21.252 "data_offset": 0, 00:08:21.252 "data_size": 65536 00:08:21.252 }, 00:08:21.252 { 00:08:21.252 "name": "BaseBdev2", 00:08:21.252 "uuid": "6f3c846d-bbbb-4e45-a4ef-3c1870e3cb2d", 00:08:21.252 "is_configured": true, 00:08:21.252 "data_offset": 0, 00:08:21.252 "data_size": 65536 00:08:21.252 }, 00:08:21.252 { 00:08:21.252 "name": "BaseBdev3", 00:08:21.252 "uuid": "0a7812ec-52ec-4845-8e5e-68d4fa49aea7", 00:08:21.252 "is_configured": true, 00:08:21.252 "data_offset": 0, 00:08:21.252 "data_size": 65536 00:08:21.252 }, 00:08:21.252 { 00:08:21.252 "name": "BaseBdev4", 00:08:21.252 "uuid": "eda3f55f-c3ff-43ed-b675-571959d6aea2", 00:08:21.252 "is_configured": true, 00:08:21.252 "data_offset": 0, 00:08:21.252 "data_size": 65536 00:08:21.252 } 00:08:21.252 ] 00:08:21.252 } 00:08:21.252 } 00:08:21.252 }' 00:08:21.252 19:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:21.252 19:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:21.252 BaseBdev2 
00:08:21.252 BaseBdev3 00:08:21.252 BaseBdev4' 00:08:21.252 19:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:21.252 19:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:21.252 19:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:21.252 19:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:21.252 19:49:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.252 19:49:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.252 19:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:21.252 19:49:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.252 19:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:21.252 19:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:21.252 19:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:21.252 19:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:21.252 19:49:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.252 19:49:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.252 19:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:21.252 19:49:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.252 19:49:12 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:21.252 19:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:21.252 19:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:21.252 19:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:21.252 19:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:21.252 19:49:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.252 19:49:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.252 19:49:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.252 19:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:21.252 19:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:21.252 19:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:21.252 19:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:08:21.252 19:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:21.252 19:49:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.252 19:49:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.511 19:49:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.511 19:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:21.511 19:49:12 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:21.511 19:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:21.511 19:49:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.511 19:49:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.511 [2024-11-26 19:49:12.216323] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:21.511 [2024-11-26 19:49:12.216367] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:21.511 [2024-11-26 19:49:12.216423] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:21.511 19:49:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.511 19:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:21.511 19:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:08:21.511 19:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:21.511 19:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:21.511 19:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:21.511 19:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:08:21.511 19:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:21.511 19:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:21.511 19:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:21.512 19:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:08:21.512 19:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:21.512 19:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:21.512 19:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:21.512 19:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:21.512 19:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:21.512 19:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:21.512 19:49:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.512 19:49:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.512 19:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:21.512 19:49:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.512 19:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:21.512 "name": "Existed_Raid", 00:08:21.512 "uuid": "179ee39e-61fd-42cf-b52e-0ebcf8e46cb7", 00:08:21.512 "strip_size_kb": 64, 00:08:21.512 "state": "offline", 00:08:21.512 "raid_level": "concat", 00:08:21.512 "superblock": false, 00:08:21.512 "num_base_bdevs": 4, 00:08:21.512 "num_base_bdevs_discovered": 3, 00:08:21.512 "num_base_bdevs_operational": 3, 00:08:21.512 "base_bdevs_list": [ 00:08:21.512 { 00:08:21.512 "name": null, 00:08:21.512 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:21.512 "is_configured": false, 00:08:21.512 "data_offset": 0, 00:08:21.512 "data_size": 65536 00:08:21.512 }, 00:08:21.512 { 00:08:21.512 "name": "BaseBdev2", 00:08:21.512 "uuid": "6f3c846d-bbbb-4e45-a4ef-3c1870e3cb2d", 00:08:21.512 "is_configured": 
true, 00:08:21.512 "data_offset": 0, 00:08:21.512 "data_size": 65536 00:08:21.512 }, 00:08:21.512 { 00:08:21.512 "name": "BaseBdev3", 00:08:21.512 "uuid": "0a7812ec-52ec-4845-8e5e-68d4fa49aea7", 00:08:21.512 "is_configured": true, 00:08:21.512 "data_offset": 0, 00:08:21.512 "data_size": 65536 00:08:21.512 }, 00:08:21.512 { 00:08:21.512 "name": "BaseBdev4", 00:08:21.512 "uuid": "eda3f55f-c3ff-43ed-b675-571959d6aea2", 00:08:21.512 "is_configured": true, 00:08:21.512 "data_offset": 0, 00:08:21.512 "data_size": 65536 00:08:21.512 } 00:08:21.512 ] 00:08:21.512 }' 00:08:21.512 19:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:21.512 19:49:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.771 19:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:21.771 19:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:21.771 19:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:21.771 19:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:21.771 19:49:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.771 19:49:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.771 19:49:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.771 19:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:21.771 19:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:21.771 19:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:21.771 19:49:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:08:21.771 19:49:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.771 [2024-11-26 19:49:12.615146] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:21.771 19:49:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.771 19:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:21.771 19:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:21.771 19:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:21.771 19:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:21.771 19:49:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.771 19:49:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.771 19:49:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.032 19:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:22.032 19:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:22.032 19:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:08:22.032 19:49:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.032 19:49:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.032 [2024-11-26 19:49:12.717412] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:22.032 19:49:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.032 19:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:22.032 19:49:12 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:22.032 19:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:22.032 19:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:22.032 19:49:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.032 19:49:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.032 19:49:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.032 19:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:22.032 19:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:22.032 19:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:08:22.032 19:49:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.032 19:49:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.032 [2024-11-26 19:49:12.819225] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:08:22.032 [2024-11-26 19:49:12.819276] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:22.032 19:49:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.032 19:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:22.032 19:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:22.032 19:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:22.032 19:49:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:08:22.032 19:49:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.032 19:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:22.032 19:49:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.032 19:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:22.032 19:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:22.032 19:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:08:22.032 19:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:08:22.032 19:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:22.032 19:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:22.032 19:49:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.032 19:49:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.032 BaseBdev2 00:08:22.032 19:49:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.032 19:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:08:22.032 19:49:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:22.032 19:49:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:22.032 19:49:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:22.032 19:49:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:22.032 19:49:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:08:22.032 19:49:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:22.032 19:49:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.032 19:49:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.032 19:49:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.032 19:49:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:22.032 19:49:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.032 19:49:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.292 [ 00:08:22.292 { 00:08:22.292 "name": "BaseBdev2", 00:08:22.292 "aliases": [ 00:08:22.292 "e110d2ed-9771-4a73-b86e-63ee66190a54" 00:08:22.292 ], 00:08:22.292 "product_name": "Malloc disk", 00:08:22.292 "block_size": 512, 00:08:22.292 "num_blocks": 65536, 00:08:22.292 "uuid": "e110d2ed-9771-4a73-b86e-63ee66190a54", 00:08:22.292 "assigned_rate_limits": { 00:08:22.292 "rw_ios_per_sec": 0, 00:08:22.292 "rw_mbytes_per_sec": 0, 00:08:22.292 "r_mbytes_per_sec": 0, 00:08:22.292 "w_mbytes_per_sec": 0 00:08:22.292 }, 00:08:22.292 "claimed": false, 00:08:22.292 "zoned": false, 00:08:22.292 "supported_io_types": { 00:08:22.292 "read": true, 00:08:22.292 "write": true, 00:08:22.292 "unmap": true, 00:08:22.292 "flush": true, 00:08:22.292 "reset": true, 00:08:22.292 "nvme_admin": false, 00:08:22.292 "nvme_io": false, 00:08:22.292 "nvme_io_md": false, 00:08:22.292 "write_zeroes": true, 00:08:22.292 "zcopy": true, 00:08:22.292 "get_zone_info": false, 00:08:22.292 "zone_management": false, 00:08:22.292 "zone_append": false, 00:08:22.292 "compare": false, 00:08:22.292 "compare_and_write": false, 00:08:22.292 "abort": true, 00:08:22.292 "seek_hole": false, 00:08:22.292 
"seek_data": false, 00:08:22.292 "copy": true, 00:08:22.292 "nvme_iov_md": false 00:08:22.292 }, 00:08:22.292 "memory_domains": [ 00:08:22.292 { 00:08:22.292 "dma_device_id": "system", 00:08:22.292 "dma_device_type": 1 00:08:22.292 }, 00:08:22.292 { 00:08:22.292 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:22.292 "dma_device_type": 2 00:08:22.292 } 00:08:22.292 ], 00:08:22.292 "driver_specific": {} 00:08:22.292 } 00:08:22.292 ] 00:08:22.292 19:49:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.292 19:49:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:22.292 19:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:22.292 19:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:22.292 19:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:22.292 19:49:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.292 19:49:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.292 BaseBdev3 00:08:22.292 19:49:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.292 19:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:08:22.292 19:49:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:22.292 19:49:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:22.292 19:49:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:22.292 19:49:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:22.292 19:49:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:08:22.292 19:49:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:22.292 19:49:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.292 19:49:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.292 19:49:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.292 19:49:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:22.292 19:49:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.292 19:49:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.292 [ 00:08:22.292 { 00:08:22.292 "name": "BaseBdev3", 00:08:22.292 "aliases": [ 00:08:22.292 "60177186-8ec2-452f-9d5a-919cb7e5fb4d" 00:08:22.292 ], 00:08:22.292 "product_name": "Malloc disk", 00:08:22.292 "block_size": 512, 00:08:22.292 "num_blocks": 65536, 00:08:22.292 "uuid": "60177186-8ec2-452f-9d5a-919cb7e5fb4d", 00:08:22.292 "assigned_rate_limits": { 00:08:22.292 "rw_ios_per_sec": 0, 00:08:22.292 "rw_mbytes_per_sec": 0, 00:08:22.292 "r_mbytes_per_sec": 0, 00:08:22.292 "w_mbytes_per_sec": 0 00:08:22.292 }, 00:08:22.292 "claimed": false, 00:08:22.292 "zoned": false, 00:08:22.292 "supported_io_types": { 00:08:22.292 "read": true, 00:08:22.292 "write": true, 00:08:22.292 "unmap": true, 00:08:22.292 "flush": true, 00:08:22.292 "reset": true, 00:08:22.292 "nvme_admin": false, 00:08:22.292 "nvme_io": false, 00:08:22.292 "nvme_io_md": false, 00:08:22.292 "write_zeroes": true, 00:08:22.292 "zcopy": true, 00:08:22.292 "get_zone_info": false, 00:08:22.292 "zone_management": false, 00:08:22.292 "zone_append": false, 00:08:22.292 "compare": false, 00:08:22.292 "compare_and_write": false, 00:08:22.292 "abort": true, 00:08:22.292 "seek_hole": false, 00:08:22.292 "seek_data": false, 
00:08:22.292 "copy": true, 00:08:22.292 "nvme_iov_md": false 00:08:22.292 }, 00:08:22.292 "memory_domains": [ 00:08:22.292 { 00:08:22.292 "dma_device_id": "system", 00:08:22.292 "dma_device_type": 1 00:08:22.292 }, 00:08:22.292 { 00:08:22.292 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:22.292 "dma_device_type": 2 00:08:22.292 } 00:08:22.292 ], 00:08:22.292 "driver_specific": {} 00:08:22.292 } 00:08:22.292 ] 00:08:22.292 19:49:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.292 19:49:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:22.292 19:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:22.292 19:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:22.292 19:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:08:22.292 19:49:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.292 19:49:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.292 BaseBdev4 00:08:22.293 19:49:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.293 19:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:08:22.293 19:49:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:08:22.293 19:49:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:22.293 19:49:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:22.293 19:49:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:22.293 19:49:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:22.293 
19:49:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:22.293 19:49:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.293 19:49:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.293 19:49:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.293 19:49:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:08:22.293 19:49:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.293 19:49:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.293 [ 00:08:22.293 { 00:08:22.293 "name": "BaseBdev4", 00:08:22.293 "aliases": [ 00:08:22.293 "3b5e0ba7-9075-42e8-9329-497b91241354" 00:08:22.293 ], 00:08:22.293 "product_name": "Malloc disk", 00:08:22.293 "block_size": 512, 00:08:22.293 "num_blocks": 65536, 00:08:22.293 "uuid": "3b5e0ba7-9075-42e8-9329-497b91241354", 00:08:22.293 "assigned_rate_limits": { 00:08:22.293 "rw_ios_per_sec": 0, 00:08:22.293 "rw_mbytes_per_sec": 0, 00:08:22.293 "r_mbytes_per_sec": 0, 00:08:22.293 "w_mbytes_per_sec": 0 00:08:22.293 }, 00:08:22.293 "claimed": false, 00:08:22.293 "zoned": false, 00:08:22.293 "supported_io_types": { 00:08:22.293 "read": true, 00:08:22.293 "write": true, 00:08:22.293 "unmap": true, 00:08:22.293 "flush": true, 00:08:22.293 "reset": true, 00:08:22.293 "nvme_admin": false, 00:08:22.293 "nvme_io": false, 00:08:22.293 "nvme_io_md": false, 00:08:22.293 "write_zeroes": true, 00:08:22.293 "zcopy": true, 00:08:22.293 "get_zone_info": false, 00:08:22.293 "zone_management": false, 00:08:22.293 "zone_append": false, 00:08:22.293 "compare": false, 00:08:22.293 "compare_and_write": false, 00:08:22.293 "abort": true, 00:08:22.293 "seek_hole": false, 00:08:22.293 "seek_data": false, 00:08:22.293 
"copy": true, 00:08:22.293 "nvme_iov_md": false 00:08:22.293 }, 00:08:22.293 "memory_domains": [ 00:08:22.293 { 00:08:22.293 "dma_device_id": "system", 00:08:22.293 "dma_device_type": 1 00:08:22.293 }, 00:08:22.293 { 00:08:22.293 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:22.293 "dma_device_type": 2 00:08:22.293 } 00:08:22.293 ], 00:08:22.293 "driver_specific": {} 00:08:22.293 } 00:08:22.293 ] 00:08:22.293 19:49:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.293 19:49:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:22.293 19:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:22.293 19:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:22.293 19:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:08:22.293 19:49:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.293 19:49:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.293 [2024-11-26 19:49:13.100846] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:22.293 [2024-11-26 19:49:13.100900] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:22.293 [2024-11-26 19:49:13.100921] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:22.293 [2024-11-26 19:49:13.102901] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:22.293 [2024-11-26 19:49:13.102971] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:08:22.293 19:49:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.293 19:49:13 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:08:22.293 19:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:22.293 19:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:22.293 19:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:22.293 19:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:22.293 19:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:08:22.293 19:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:22.293 19:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:22.293 19:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:22.293 19:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:22.293 19:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:22.293 19:49:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.293 19:49:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.293 19:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:22.293 19:49:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.293 19:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:22.293 "name": "Existed_Raid", 00:08:22.293 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:22.293 "strip_size_kb": 64, 00:08:22.293 "state": "configuring", 00:08:22.293 
"raid_level": "concat", 00:08:22.293 "superblock": false, 00:08:22.293 "num_base_bdevs": 4, 00:08:22.293 "num_base_bdevs_discovered": 3, 00:08:22.293 "num_base_bdevs_operational": 4, 00:08:22.293 "base_bdevs_list": [ 00:08:22.293 { 00:08:22.293 "name": "BaseBdev1", 00:08:22.293 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:22.293 "is_configured": false, 00:08:22.293 "data_offset": 0, 00:08:22.293 "data_size": 0 00:08:22.293 }, 00:08:22.293 { 00:08:22.293 "name": "BaseBdev2", 00:08:22.293 "uuid": "e110d2ed-9771-4a73-b86e-63ee66190a54", 00:08:22.293 "is_configured": true, 00:08:22.293 "data_offset": 0, 00:08:22.293 "data_size": 65536 00:08:22.293 }, 00:08:22.293 { 00:08:22.293 "name": "BaseBdev3", 00:08:22.293 "uuid": "60177186-8ec2-452f-9d5a-919cb7e5fb4d", 00:08:22.293 "is_configured": true, 00:08:22.293 "data_offset": 0, 00:08:22.293 "data_size": 65536 00:08:22.293 }, 00:08:22.293 { 00:08:22.293 "name": "BaseBdev4", 00:08:22.293 "uuid": "3b5e0ba7-9075-42e8-9329-497b91241354", 00:08:22.293 "is_configured": true, 00:08:22.293 "data_offset": 0, 00:08:22.293 "data_size": 65536 00:08:22.293 } 00:08:22.293 ] 00:08:22.293 }' 00:08:22.293 19:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:22.293 19:49:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.552 19:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:08:22.552 19:49:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.552 19:49:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.552 [2024-11-26 19:49:13.428967] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:22.552 19:49:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.552 19:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:08:22.552 19:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:22.552 19:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:22.552 19:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:22.552 19:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:22.552 19:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:08:22.552 19:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:22.552 19:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:22.552 19:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:22.552 19:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:22.552 19:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:22.552 19:49:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.552 19:49:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.552 19:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:22.552 19:49:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.552 19:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:22.552 "name": "Existed_Raid", 00:08:22.552 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:22.552 "strip_size_kb": 64, 00:08:22.552 "state": "configuring", 00:08:22.552 "raid_level": "concat", 00:08:22.552 "superblock": false, 
00:08:22.552 "num_base_bdevs": 4, 00:08:22.552 "num_base_bdevs_discovered": 2, 00:08:22.552 "num_base_bdevs_operational": 4, 00:08:22.552 "base_bdevs_list": [ 00:08:22.552 { 00:08:22.552 "name": "BaseBdev1", 00:08:22.552 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:22.552 "is_configured": false, 00:08:22.552 "data_offset": 0, 00:08:22.552 "data_size": 0 00:08:22.552 }, 00:08:22.552 { 00:08:22.552 "name": null, 00:08:22.552 "uuid": "e110d2ed-9771-4a73-b86e-63ee66190a54", 00:08:22.552 "is_configured": false, 00:08:22.552 "data_offset": 0, 00:08:22.552 "data_size": 65536 00:08:22.552 }, 00:08:22.552 { 00:08:22.552 "name": "BaseBdev3", 00:08:22.552 "uuid": "60177186-8ec2-452f-9d5a-919cb7e5fb4d", 00:08:22.552 "is_configured": true, 00:08:22.552 "data_offset": 0, 00:08:22.552 "data_size": 65536 00:08:22.552 }, 00:08:22.552 { 00:08:22.552 "name": "BaseBdev4", 00:08:22.552 "uuid": "3b5e0ba7-9075-42e8-9329-497b91241354", 00:08:22.552 "is_configured": true, 00:08:22.552 "data_offset": 0, 00:08:22.552 "data_size": 65536 00:08:22.552 } 00:08:22.552 ] 00:08:22.552 }' 00:08:22.552 19:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:22.552 19:49:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.812 19:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:22.812 19:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:22.812 19:49:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.812 19:49:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.072 19:49:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.072 19:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:08:23.072 19:49:13 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:23.072 19:49:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.072 19:49:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.072 [2024-11-26 19:49:13.801849] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:23.072 BaseBdev1 00:08:23.072 19:49:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.072 19:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:08:23.072 19:49:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:23.072 19:49:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:23.072 19:49:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:23.072 19:49:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:23.072 19:49:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:23.072 19:49:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:23.072 19:49:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.072 19:49:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.072 19:49:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.072 19:49:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:23.072 19:49:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.072 19:49:13 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:23.072 [ 00:08:23.072 { 00:08:23.072 "name": "BaseBdev1", 00:08:23.072 "aliases": [ 00:08:23.072 "e929d4b1-e36c-452d-9641-f53674083aba" 00:08:23.072 ], 00:08:23.072 "product_name": "Malloc disk", 00:08:23.072 "block_size": 512, 00:08:23.073 "num_blocks": 65536, 00:08:23.073 "uuid": "e929d4b1-e36c-452d-9641-f53674083aba", 00:08:23.073 "assigned_rate_limits": { 00:08:23.073 "rw_ios_per_sec": 0, 00:08:23.073 "rw_mbytes_per_sec": 0, 00:08:23.073 "r_mbytes_per_sec": 0, 00:08:23.073 "w_mbytes_per_sec": 0 00:08:23.073 }, 00:08:23.073 "claimed": true, 00:08:23.073 "claim_type": "exclusive_write", 00:08:23.073 "zoned": false, 00:08:23.073 "supported_io_types": { 00:08:23.073 "read": true, 00:08:23.073 "write": true, 00:08:23.073 "unmap": true, 00:08:23.073 "flush": true, 00:08:23.073 "reset": true, 00:08:23.073 "nvme_admin": false, 00:08:23.073 "nvme_io": false, 00:08:23.073 "nvme_io_md": false, 00:08:23.073 "write_zeroes": true, 00:08:23.073 "zcopy": true, 00:08:23.073 "get_zone_info": false, 00:08:23.073 "zone_management": false, 00:08:23.073 "zone_append": false, 00:08:23.073 "compare": false, 00:08:23.073 "compare_and_write": false, 00:08:23.073 "abort": true, 00:08:23.073 "seek_hole": false, 00:08:23.073 "seek_data": false, 00:08:23.073 "copy": true, 00:08:23.073 "nvme_iov_md": false 00:08:23.073 }, 00:08:23.073 "memory_domains": [ 00:08:23.073 { 00:08:23.073 "dma_device_id": "system", 00:08:23.073 "dma_device_type": 1 00:08:23.073 }, 00:08:23.073 { 00:08:23.073 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:23.073 "dma_device_type": 2 00:08:23.073 } 00:08:23.073 ], 00:08:23.073 "driver_specific": {} 00:08:23.073 } 00:08:23.073 ] 00:08:23.073 19:49:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.073 19:49:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:23.073 19:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:08:23.073 19:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:23.073 19:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:23.073 19:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:23.073 19:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:23.073 19:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:08:23.073 19:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:23.073 19:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:23.073 19:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:23.073 19:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:23.073 19:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:23.073 19:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:23.073 19:49:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.073 19:49:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.073 19:49:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.073 19:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:23.073 "name": "Existed_Raid", 00:08:23.073 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:23.073 "strip_size_kb": 64, 00:08:23.073 "state": "configuring", 00:08:23.073 "raid_level": "concat", 00:08:23.073 "superblock": false, 
00:08:23.073 "num_base_bdevs": 4, 00:08:23.073 "num_base_bdevs_discovered": 3, 00:08:23.073 "num_base_bdevs_operational": 4, 00:08:23.073 "base_bdevs_list": [ 00:08:23.073 { 00:08:23.073 "name": "BaseBdev1", 00:08:23.073 "uuid": "e929d4b1-e36c-452d-9641-f53674083aba", 00:08:23.073 "is_configured": true, 00:08:23.073 "data_offset": 0, 00:08:23.073 "data_size": 65536 00:08:23.073 }, 00:08:23.073 { 00:08:23.073 "name": null, 00:08:23.073 "uuid": "e110d2ed-9771-4a73-b86e-63ee66190a54", 00:08:23.073 "is_configured": false, 00:08:23.073 "data_offset": 0, 00:08:23.073 "data_size": 65536 00:08:23.073 }, 00:08:23.073 { 00:08:23.073 "name": "BaseBdev3", 00:08:23.073 "uuid": "60177186-8ec2-452f-9d5a-919cb7e5fb4d", 00:08:23.073 "is_configured": true, 00:08:23.073 "data_offset": 0, 00:08:23.073 "data_size": 65536 00:08:23.073 }, 00:08:23.073 { 00:08:23.073 "name": "BaseBdev4", 00:08:23.073 "uuid": "3b5e0ba7-9075-42e8-9329-497b91241354", 00:08:23.073 "is_configured": true, 00:08:23.073 "data_offset": 0, 00:08:23.073 "data_size": 65536 00:08:23.073 } 00:08:23.073 ] 00:08:23.073 }' 00:08:23.073 19:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:23.073 19:49:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.332 19:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:23.332 19:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:23.332 19:49:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.332 19:49:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.332 19:49:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.332 19:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:08:23.332 19:49:14 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:08:23.332 19:49:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.332 19:49:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.332 [2024-11-26 19:49:14.166008] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:23.332 19:49:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.332 19:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:08:23.332 19:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:23.332 19:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:23.332 19:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:23.332 19:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:23.332 19:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:08:23.332 19:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:23.332 19:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:23.332 19:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:23.332 19:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:23.332 19:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:23.332 19:49:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.332 19:49:14 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:23.332 19:49:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.332 19:49:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.332 19:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:23.332 "name": "Existed_Raid", 00:08:23.332 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:23.332 "strip_size_kb": 64, 00:08:23.332 "state": "configuring", 00:08:23.332 "raid_level": "concat", 00:08:23.332 "superblock": false, 00:08:23.332 "num_base_bdevs": 4, 00:08:23.332 "num_base_bdevs_discovered": 2, 00:08:23.332 "num_base_bdevs_operational": 4, 00:08:23.332 "base_bdevs_list": [ 00:08:23.332 { 00:08:23.332 "name": "BaseBdev1", 00:08:23.332 "uuid": "e929d4b1-e36c-452d-9641-f53674083aba", 00:08:23.332 "is_configured": true, 00:08:23.332 "data_offset": 0, 00:08:23.332 "data_size": 65536 00:08:23.332 }, 00:08:23.332 { 00:08:23.332 "name": null, 00:08:23.332 "uuid": "e110d2ed-9771-4a73-b86e-63ee66190a54", 00:08:23.332 "is_configured": false, 00:08:23.332 "data_offset": 0, 00:08:23.332 "data_size": 65536 00:08:23.332 }, 00:08:23.332 { 00:08:23.332 "name": null, 00:08:23.332 "uuid": "60177186-8ec2-452f-9d5a-919cb7e5fb4d", 00:08:23.332 "is_configured": false, 00:08:23.332 "data_offset": 0, 00:08:23.332 "data_size": 65536 00:08:23.332 }, 00:08:23.332 { 00:08:23.332 "name": "BaseBdev4", 00:08:23.332 "uuid": "3b5e0ba7-9075-42e8-9329-497b91241354", 00:08:23.332 "is_configured": true, 00:08:23.332 "data_offset": 0, 00:08:23.332 "data_size": 65536 00:08:23.332 } 00:08:23.332 ] 00:08:23.332 }' 00:08:23.332 19:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:23.332 19:49:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.592 19:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:08:23.592 19:49:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.592 19:49:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.592 19:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:23.592 19:49:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.592 19:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:08:23.592 19:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:08:23.592 19:49:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.592 19:49:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.853 [2024-11-26 19:49:14.526099] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:23.853 19:49:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.853 19:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:08:23.853 19:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:23.853 19:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:23.853 19:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:23.853 19:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:23.853 19:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:08:23.853 19:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # 
local raid_bdev_info 00:08:23.853 19:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:23.853 19:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:23.853 19:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:23.853 19:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:23.853 19:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:23.853 19:49:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.853 19:49:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.853 19:49:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.853 19:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:23.853 "name": "Existed_Raid", 00:08:23.853 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:23.853 "strip_size_kb": 64, 00:08:23.853 "state": "configuring", 00:08:23.853 "raid_level": "concat", 00:08:23.853 "superblock": false, 00:08:23.853 "num_base_bdevs": 4, 00:08:23.853 "num_base_bdevs_discovered": 3, 00:08:23.854 "num_base_bdevs_operational": 4, 00:08:23.854 "base_bdevs_list": [ 00:08:23.854 { 00:08:23.854 "name": "BaseBdev1", 00:08:23.854 "uuid": "e929d4b1-e36c-452d-9641-f53674083aba", 00:08:23.854 "is_configured": true, 00:08:23.854 "data_offset": 0, 00:08:23.854 "data_size": 65536 00:08:23.854 }, 00:08:23.854 { 00:08:23.854 "name": null, 00:08:23.854 "uuid": "e110d2ed-9771-4a73-b86e-63ee66190a54", 00:08:23.854 "is_configured": false, 00:08:23.854 "data_offset": 0, 00:08:23.854 "data_size": 65536 00:08:23.854 }, 00:08:23.854 { 00:08:23.854 "name": "BaseBdev3", 00:08:23.854 "uuid": "60177186-8ec2-452f-9d5a-919cb7e5fb4d", 00:08:23.854 
"is_configured": true, 00:08:23.854 "data_offset": 0, 00:08:23.854 "data_size": 65536 00:08:23.854 }, 00:08:23.854 { 00:08:23.854 "name": "BaseBdev4", 00:08:23.854 "uuid": "3b5e0ba7-9075-42e8-9329-497b91241354", 00:08:23.854 "is_configured": true, 00:08:23.854 "data_offset": 0, 00:08:23.854 "data_size": 65536 00:08:23.854 } 00:08:23.854 ] 00:08:23.854 }' 00:08:23.854 19:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:23.854 19:49:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.114 19:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:24.114 19:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:24.114 19:49:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.114 19:49:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.114 19:49:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.114 19:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:08:24.114 19:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:24.114 19:49:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.114 19:49:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.114 [2024-11-26 19:49:14.898220] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:24.114 19:49:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.114 19:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:08:24.114 19:49:14 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:24.114 19:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:24.114 19:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:24.114 19:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:24.114 19:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:08:24.114 19:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:24.114 19:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:24.114 19:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:24.114 19:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:24.114 19:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:24.114 19:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:24.114 19:49:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.114 19:49:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.114 19:49:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.114 19:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:24.114 "name": "Existed_Raid", 00:08:24.114 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:24.114 "strip_size_kb": 64, 00:08:24.114 "state": "configuring", 00:08:24.114 "raid_level": "concat", 00:08:24.114 "superblock": false, 00:08:24.115 "num_base_bdevs": 4, 00:08:24.115 "num_base_bdevs_discovered": 2, 00:08:24.115 "num_base_bdevs_operational": 4, 
00:08:24.115 "base_bdevs_list": [ 00:08:24.115 { 00:08:24.115 "name": null, 00:08:24.115 "uuid": "e929d4b1-e36c-452d-9641-f53674083aba", 00:08:24.115 "is_configured": false, 00:08:24.115 "data_offset": 0, 00:08:24.115 "data_size": 65536 00:08:24.115 }, 00:08:24.115 { 00:08:24.115 "name": null, 00:08:24.115 "uuid": "e110d2ed-9771-4a73-b86e-63ee66190a54", 00:08:24.115 "is_configured": false, 00:08:24.115 "data_offset": 0, 00:08:24.115 "data_size": 65536 00:08:24.115 }, 00:08:24.115 { 00:08:24.115 "name": "BaseBdev3", 00:08:24.115 "uuid": "60177186-8ec2-452f-9d5a-919cb7e5fb4d", 00:08:24.115 "is_configured": true, 00:08:24.115 "data_offset": 0, 00:08:24.115 "data_size": 65536 00:08:24.115 }, 00:08:24.115 { 00:08:24.115 "name": "BaseBdev4", 00:08:24.115 "uuid": "3b5e0ba7-9075-42e8-9329-497b91241354", 00:08:24.115 "is_configured": true, 00:08:24.115 "data_offset": 0, 00:08:24.115 "data_size": 65536 00:08:24.115 } 00:08:24.115 ] 00:08:24.115 }' 00:08:24.115 19:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:24.115 19:49:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.376 19:49:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:24.376 19:49:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:24.376 19:49:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.376 19:49:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.376 19:49:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.376 19:49:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:08:24.376 19:49:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:08:24.376 19:49:15 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.376 19:49:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.376 [2024-11-26 19:49:15.308815] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:24.637 19:49:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.637 19:49:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:08:24.637 19:49:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:24.637 19:49:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:24.637 19:49:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:24.637 19:49:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:24.637 19:49:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:08:24.637 19:49:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:24.637 19:49:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:24.637 19:49:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:24.637 19:49:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:24.637 19:49:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:24.637 19:49:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:24.637 19:49:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.637 19:49:15 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.637 19:49:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.637 19:49:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:24.637 "name": "Existed_Raid", 00:08:24.637 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:24.637 "strip_size_kb": 64, 00:08:24.637 "state": "configuring", 00:08:24.637 "raid_level": "concat", 00:08:24.637 "superblock": false, 00:08:24.637 "num_base_bdevs": 4, 00:08:24.637 "num_base_bdevs_discovered": 3, 00:08:24.637 "num_base_bdevs_operational": 4, 00:08:24.637 "base_bdevs_list": [ 00:08:24.637 { 00:08:24.637 "name": null, 00:08:24.637 "uuid": "e929d4b1-e36c-452d-9641-f53674083aba", 00:08:24.637 "is_configured": false, 00:08:24.637 "data_offset": 0, 00:08:24.637 "data_size": 65536 00:08:24.637 }, 00:08:24.637 { 00:08:24.637 "name": "BaseBdev2", 00:08:24.637 "uuid": "e110d2ed-9771-4a73-b86e-63ee66190a54", 00:08:24.637 "is_configured": true, 00:08:24.637 "data_offset": 0, 00:08:24.637 "data_size": 65536 00:08:24.637 }, 00:08:24.637 { 00:08:24.637 "name": "BaseBdev3", 00:08:24.637 "uuid": "60177186-8ec2-452f-9d5a-919cb7e5fb4d", 00:08:24.637 "is_configured": true, 00:08:24.637 "data_offset": 0, 00:08:24.637 "data_size": 65536 00:08:24.637 }, 00:08:24.637 { 00:08:24.637 "name": "BaseBdev4", 00:08:24.637 "uuid": "3b5e0ba7-9075-42e8-9329-497b91241354", 00:08:24.637 "is_configured": true, 00:08:24.637 "data_offset": 0, 00:08:24.637 "data_size": 65536 00:08:24.637 } 00:08:24.637 ] 00:08:24.637 }' 00:08:24.637 19:49:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:24.637 19:49:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.898 19:49:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:24.898 19:49:15 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:24.898 19:49:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.898 19:49:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.898 19:49:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.898 19:49:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:08:24.898 19:49:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:08:24.898 19:49:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:24.898 19:49:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.898 19:49:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.898 19:49:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.898 19:49:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u e929d4b1-e36c-452d-9641-f53674083aba 00:08:24.898 19:49:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.898 19:49:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.898 [2024-11-26 19:49:15.721598] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:08:24.898 [2024-11-26 19:49:15.721646] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:08:24.898 [2024-11-26 19:49:15.721653] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:08:24.898 [2024-11-26 19:49:15.721920] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:08:24.898 [2024-11-26 19:49:15.722053] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:08:24.898 [2024-11-26 19:49:15.722063] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:08:24.898 [2024-11-26 19:49:15.722281] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:24.898 NewBaseBdev 00:08:24.898 19:49:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.898 19:49:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:08:24.898 19:49:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:08:24.898 19:49:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:24.898 19:49:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:24.898 19:49:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:24.898 19:49:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:24.898 19:49:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:24.898 19:49:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.898 19:49:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.898 19:49:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.898 19:49:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:08:24.898 19:49:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.898 19:49:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.898 [ 00:08:24.898 { 
00:08:24.898 "name": "NewBaseBdev", 00:08:24.898 "aliases": [ 00:08:24.898 "e929d4b1-e36c-452d-9641-f53674083aba" 00:08:24.898 ], 00:08:24.898 "product_name": "Malloc disk", 00:08:24.898 "block_size": 512, 00:08:24.898 "num_blocks": 65536, 00:08:24.898 "uuid": "e929d4b1-e36c-452d-9641-f53674083aba", 00:08:24.898 "assigned_rate_limits": { 00:08:24.898 "rw_ios_per_sec": 0, 00:08:24.898 "rw_mbytes_per_sec": 0, 00:08:24.898 "r_mbytes_per_sec": 0, 00:08:24.898 "w_mbytes_per_sec": 0 00:08:24.898 }, 00:08:24.898 "claimed": true, 00:08:24.898 "claim_type": "exclusive_write", 00:08:24.898 "zoned": false, 00:08:24.898 "supported_io_types": { 00:08:24.898 "read": true, 00:08:24.898 "write": true, 00:08:24.898 "unmap": true, 00:08:24.898 "flush": true, 00:08:24.898 "reset": true, 00:08:24.898 "nvme_admin": false, 00:08:24.898 "nvme_io": false, 00:08:24.898 "nvme_io_md": false, 00:08:24.898 "write_zeroes": true, 00:08:24.898 "zcopy": true, 00:08:24.898 "get_zone_info": false, 00:08:24.898 "zone_management": false, 00:08:24.898 "zone_append": false, 00:08:24.898 "compare": false, 00:08:24.898 "compare_and_write": false, 00:08:24.898 "abort": true, 00:08:24.898 "seek_hole": false, 00:08:24.898 "seek_data": false, 00:08:24.898 "copy": true, 00:08:24.898 "nvme_iov_md": false 00:08:24.898 }, 00:08:24.898 "memory_domains": [ 00:08:24.898 { 00:08:24.898 "dma_device_id": "system", 00:08:24.898 "dma_device_type": 1 00:08:24.898 }, 00:08:24.898 { 00:08:24.898 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:24.898 "dma_device_type": 2 00:08:24.898 } 00:08:24.898 ], 00:08:24.898 "driver_specific": {} 00:08:24.898 } 00:08:24.898 ] 00:08:24.898 19:49:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.898 19:49:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:24.898 19:49:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:08:24.898 
19:49:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:24.898 19:49:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:24.898 19:49:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:24.898 19:49:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:24.898 19:49:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:08:24.898 19:49:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:24.898 19:49:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:24.898 19:49:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:24.898 19:49:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:24.898 19:49:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:24.898 19:49:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:24.898 19:49:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.898 19:49:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.899 19:49:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.899 19:49:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:24.899 "name": "Existed_Raid", 00:08:24.899 "uuid": "b11005d2-d94d-4602-99e3-02d2142d6253", 00:08:24.899 "strip_size_kb": 64, 00:08:24.899 "state": "online", 00:08:24.899 "raid_level": "concat", 00:08:24.899 "superblock": false, 00:08:24.899 "num_base_bdevs": 4, 00:08:24.899 "num_base_bdevs_discovered": 4, 00:08:24.899 
"num_base_bdevs_operational": 4, 00:08:24.899 "base_bdevs_list": [ 00:08:24.899 { 00:08:24.899 "name": "NewBaseBdev", 00:08:24.899 "uuid": "e929d4b1-e36c-452d-9641-f53674083aba", 00:08:24.899 "is_configured": true, 00:08:24.899 "data_offset": 0, 00:08:24.899 "data_size": 65536 00:08:24.899 }, 00:08:24.899 { 00:08:24.899 "name": "BaseBdev2", 00:08:24.899 "uuid": "e110d2ed-9771-4a73-b86e-63ee66190a54", 00:08:24.899 "is_configured": true, 00:08:24.899 "data_offset": 0, 00:08:24.899 "data_size": 65536 00:08:24.899 }, 00:08:24.899 { 00:08:24.899 "name": "BaseBdev3", 00:08:24.899 "uuid": "60177186-8ec2-452f-9d5a-919cb7e5fb4d", 00:08:24.899 "is_configured": true, 00:08:24.899 "data_offset": 0, 00:08:24.899 "data_size": 65536 00:08:24.899 }, 00:08:24.899 { 00:08:24.899 "name": "BaseBdev4", 00:08:24.899 "uuid": "3b5e0ba7-9075-42e8-9329-497b91241354", 00:08:24.899 "is_configured": true, 00:08:24.899 "data_offset": 0, 00:08:24.899 "data_size": 65536 00:08:24.899 } 00:08:24.899 ] 00:08:24.899 }' 00:08:24.899 19:49:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:24.899 19:49:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.165 19:49:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:08:25.165 19:49:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:25.165 19:49:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:25.165 19:49:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:25.165 19:49:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:25.165 19:49:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:25.165 19:49:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
Existed_Raid 00:08:25.165 19:49:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:25.165 19:49:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.165 19:49:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.165 [2024-11-26 19:49:16.058126] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:25.165 19:49:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.165 19:49:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:25.165 "name": "Existed_Raid", 00:08:25.165 "aliases": [ 00:08:25.165 "b11005d2-d94d-4602-99e3-02d2142d6253" 00:08:25.165 ], 00:08:25.165 "product_name": "Raid Volume", 00:08:25.165 "block_size": 512, 00:08:25.165 "num_blocks": 262144, 00:08:25.165 "uuid": "b11005d2-d94d-4602-99e3-02d2142d6253", 00:08:25.165 "assigned_rate_limits": { 00:08:25.165 "rw_ios_per_sec": 0, 00:08:25.165 "rw_mbytes_per_sec": 0, 00:08:25.165 "r_mbytes_per_sec": 0, 00:08:25.165 "w_mbytes_per_sec": 0 00:08:25.165 }, 00:08:25.165 "claimed": false, 00:08:25.165 "zoned": false, 00:08:25.165 "supported_io_types": { 00:08:25.165 "read": true, 00:08:25.165 "write": true, 00:08:25.165 "unmap": true, 00:08:25.165 "flush": true, 00:08:25.165 "reset": true, 00:08:25.165 "nvme_admin": false, 00:08:25.165 "nvme_io": false, 00:08:25.165 "nvme_io_md": false, 00:08:25.165 "write_zeroes": true, 00:08:25.165 "zcopy": false, 00:08:25.165 "get_zone_info": false, 00:08:25.165 "zone_management": false, 00:08:25.165 "zone_append": false, 00:08:25.165 "compare": false, 00:08:25.165 "compare_and_write": false, 00:08:25.165 "abort": false, 00:08:25.165 "seek_hole": false, 00:08:25.165 "seek_data": false, 00:08:25.165 "copy": false, 00:08:25.165 "nvme_iov_md": false 00:08:25.165 }, 00:08:25.165 "memory_domains": [ 00:08:25.165 { 00:08:25.165 "dma_device_id": "system", 
00:08:25.165 "dma_device_type": 1 00:08:25.165 }, 00:08:25.165 { 00:08:25.165 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:25.165 "dma_device_type": 2 00:08:25.165 }, 00:08:25.165 { 00:08:25.165 "dma_device_id": "system", 00:08:25.165 "dma_device_type": 1 00:08:25.165 }, 00:08:25.165 { 00:08:25.165 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:25.165 "dma_device_type": 2 00:08:25.165 }, 00:08:25.165 { 00:08:25.165 "dma_device_id": "system", 00:08:25.165 "dma_device_type": 1 00:08:25.165 }, 00:08:25.165 { 00:08:25.165 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:25.165 "dma_device_type": 2 00:08:25.165 }, 00:08:25.165 { 00:08:25.165 "dma_device_id": "system", 00:08:25.165 "dma_device_type": 1 00:08:25.165 }, 00:08:25.165 { 00:08:25.165 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:25.165 "dma_device_type": 2 00:08:25.165 } 00:08:25.165 ], 00:08:25.165 "driver_specific": { 00:08:25.165 "raid": { 00:08:25.165 "uuid": "b11005d2-d94d-4602-99e3-02d2142d6253", 00:08:25.165 "strip_size_kb": 64, 00:08:25.165 "state": "online", 00:08:25.165 "raid_level": "concat", 00:08:25.165 "superblock": false, 00:08:25.165 "num_base_bdevs": 4, 00:08:25.165 "num_base_bdevs_discovered": 4, 00:08:25.165 "num_base_bdevs_operational": 4, 00:08:25.165 "base_bdevs_list": [ 00:08:25.165 { 00:08:25.165 "name": "NewBaseBdev", 00:08:25.165 "uuid": "e929d4b1-e36c-452d-9641-f53674083aba", 00:08:25.165 "is_configured": true, 00:08:25.165 "data_offset": 0, 00:08:25.165 "data_size": 65536 00:08:25.165 }, 00:08:25.165 { 00:08:25.165 "name": "BaseBdev2", 00:08:25.165 "uuid": "e110d2ed-9771-4a73-b86e-63ee66190a54", 00:08:25.165 "is_configured": true, 00:08:25.165 "data_offset": 0, 00:08:25.165 "data_size": 65536 00:08:25.165 }, 00:08:25.165 { 00:08:25.165 "name": "BaseBdev3", 00:08:25.165 "uuid": "60177186-8ec2-452f-9d5a-919cb7e5fb4d", 00:08:25.166 "is_configured": true, 00:08:25.166 "data_offset": 0, 00:08:25.166 "data_size": 65536 00:08:25.166 }, 00:08:25.166 { 00:08:25.166 "name": "BaseBdev4", 
00:08:25.166 "uuid": "3b5e0ba7-9075-42e8-9329-497b91241354", 00:08:25.166 "is_configured": true, 00:08:25.166 "data_offset": 0, 00:08:25.166 "data_size": 65536 00:08:25.166 } 00:08:25.166 ] 00:08:25.166 } 00:08:25.166 } 00:08:25.166 }' 00:08:25.166 19:49:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:25.426 19:49:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:08:25.426 BaseBdev2 00:08:25.426 BaseBdev3 00:08:25.426 BaseBdev4' 00:08:25.426 19:49:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:25.426 19:49:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:25.426 19:49:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:25.426 19:49:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:08:25.426 19:49:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.426 19:49:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.426 19:49:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:25.426 19:49:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.426 19:49:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:25.426 19:49:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:25.426 19:49:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:25.426 19:49:16 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:25.426 19:49:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:25.426 19:49:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.426 19:49:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.426 19:49:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.426 19:49:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:25.426 19:49:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:25.427 19:49:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:25.427 19:49:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:25.427 19:49:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.427 19:49:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:25.427 19:49:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.427 19:49:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.427 19:49:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:25.427 19:49:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:25.427 19:49:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:25.427 19:49:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:08:25.427 19:49:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:08:25.427 19:49:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.427 19:49:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.427 19:49:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.427 19:49:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:25.427 19:49:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:25.427 19:49:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:25.427 19:49:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.427 19:49:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.427 [2024-11-26 19:49:16.285762] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:25.427 [2024-11-26 19:49:16.285881] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:25.427 [2024-11-26 19:49:16.286004] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:25.427 [2024-11-26 19:49:16.286101] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:25.427 [2024-11-26 19:49:16.286137] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:08:25.427 19:49:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.427 19:49:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 69471 00:08:25.427 19:49:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 69471 
']' 00:08:25.427 19:49:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 69471 00:08:25.427 19:49:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:08:25.427 19:49:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:25.427 19:49:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69471 00:08:25.427 19:49:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:25.427 killing process with pid 69471 00:08:25.427 19:49:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:25.427 19:49:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69471' 00:08:25.427 19:49:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 69471 00:08:25.427 [2024-11-26 19:49:16.321383] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:25.427 19:49:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 69471 00:08:25.687 [2024-11-26 19:49:16.578304] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:26.628 19:49:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:08:26.628 00:08:26.628 real 0m8.621s 00:08:26.628 user 0m13.679s 00:08:26.628 sys 0m1.410s 00:08:26.628 19:49:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:26.628 ************************************ 00:08:26.628 END TEST raid_state_function_test 00:08:26.628 ************************************ 00:08:26.628 19:49:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.628 19:49:17 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 4 true 00:08:26.628 
19:49:17 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:26.628 19:49:17 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:26.628 19:49:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:26.628 ************************************ 00:08:26.628 START TEST raid_state_function_test_sb 00:08:26.628 ************************************ 00:08:26.628 19:49:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 4 true 00:08:26.628 19:49:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:08:26.628 19:49:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:08:26.628 19:49:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:08:26.628 19:49:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:26.628 19:49:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:26.628 19:49:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:26.628 19:49:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:26.628 19:49:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:26.629 19:49:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:26.629 19:49:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:26.629 19:49:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:26.629 19:49:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:26.629 19:49:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:26.629 19:49:17 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:26.629 19:49:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:26.629 19:49:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:08:26.629 19:49:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:26.629 19:49:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:26.629 19:49:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:08:26.629 19:49:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:26.629 19:49:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:26.629 19:49:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:26.629 19:49:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:26.629 19:49:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:26.629 19:49:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:08:26.629 19:49:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:26.629 19:49:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:26.629 19:49:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:08:26.629 19:49:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:08:26.629 Process raid pid: 70109 00:08:26.629 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:26.629 19:49:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=70109 00:08:26.629 19:49:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 70109' 00:08:26.629 19:49:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 70109 00:08:26.629 19:49:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 70109 ']' 00:08:26.629 19:49:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:26.629 19:49:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:26.629 19:49:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:26.629 19:49:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:26.629 19:49:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:26.629 19:49:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:26.629 [2024-11-26 19:49:17.470619] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 
00:08:26.629 [2024-11-26 19:49:17.470748] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:26.888 [2024-11-26 19:49:17.629769] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:26.888 [2024-11-26 19:49:17.749624] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:27.147 [2024-11-26 19:49:17.900153] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:27.147 [2024-11-26 19:49:17.900188] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:27.407 19:49:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:27.407 19:49:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:08:27.407 19:49:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:08:27.407 19:49:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.407 19:49:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:27.407 [2024-11-26 19:49:18.333823] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:27.407 [2024-11-26 19:49:18.333886] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:27.407 [2024-11-26 19:49:18.333897] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:27.407 [2024-11-26 19:49:18.333907] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:27.407 [2024-11-26 19:49:18.333913] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:08:27.407 [2024-11-26 19:49:18.333922] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:27.407 [2024-11-26 19:49:18.333928] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:08:27.407 [2024-11-26 19:49:18.333937] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:08:27.407 19:49:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.407 19:49:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:08:27.407 19:49:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:27.407 19:49:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:27.407 19:49:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:27.407 19:49:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:27.407 19:49:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:08:27.407 19:49:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:27.407 19:49:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:27.407 19:49:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:27.407 19:49:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:27.407 19:49:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:27.407 19:49:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.407 19:49:18 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:27.666 19:49:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:27.666 19:49:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.666 19:49:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:27.666 "name": "Existed_Raid", 00:08:27.666 "uuid": "273f405d-8b4f-400f-b647-34d87a7e1237", 00:08:27.666 "strip_size_kb": 64, 00:08:27.666 "state": "configuring", 00:08:27.666 "raid_level": "concat", 00:08:27.666 "superblock": true, 00:08:27.666 "num_base_bdevs": 4, 00:08:27.666 "num_base_bdevs_discovered": 0, 00:08:27.666 "num_base_bdevs_operational": 4, 00:08:27.666 "base_bdevs_list": [ 00:08:27.666 { 00:08:27.666 "name": "BaseBdev1", 00:08:27.666 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:27.666 "is_configured": false, 00:08:27.666 "data_offset": 0, 00:08:27.666 "data_size": 0 00:08:27.666 }, 00:08:27.666 { 00:08:27.666 "name": "BaseBdev2", 00:08:27.666 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:27.666 "is_configured": false, 00:08:27.666 "data_offset": 0, 00:08:27.666 "data_size": 0 00:08:27.666 }, 00:08:27.666 { 00:08:27.666 "name": "BaseBdev3", 00:08:27.666 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:27.666 "is_configured": false, 00:08:27.666 "data_offset": 0, 00:08:27.666 "data_size": 0 00:08:27.666 }, 00:08:27.666 { 00:08:27.666 "name": "BaseBdev4", 00:08:27.666 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:27.666 "is_configured": false, 00:08:27.666 "data_offset": 0, 00:08:27.666 "data_size": 0 00:08:27.666 } 00:08:27.666 ] 00:08:27.666 }' 00:08:27.666 19:49:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:27.666 19:49:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:27.925 19:49:18 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:27.925 19:49:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.925 19:49:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:27.925 [2024-11-26 19:49:18.713827] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:27.925 [2024-11-26 19:49:18.713872] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:27.925 19:49:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.925 19:49:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:08:27.925 19:49:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.925 19:49:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:27.925 [2024-11-26 19:49:18.721823] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:27.925 [2024-11-26 19:49:18.721868] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:27.925 [2024-11-26 19:49:18.721878] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:27.925 [2024-11-26 19:49:18.721887] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:27.925 [2024-11-26 19:49:18.721893] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:27.925 [2024-11-26 19:49:18.721902] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:27.925 [2024-11-26 19:49:18.721909] bdev.c:8482:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev4 00:08:27.925 [2024-11-26 19:49:18.721917] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:08:27.925 19:49:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.925 19:49:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:27.925 19:49:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.925 19:49:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:27.925 [2024-11-26 19:49:18.756908] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:27.925 BaseBdev1 00:08:27.926 19:49:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.926 19:49:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:27.926 19:49:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:27.926 19:49:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:27.926 19:49:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:27.926 19:49:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:27.926 19:49:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:27.926 19:49:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:27.926 19:49:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.926 19:49:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:27.926 19:49:18 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.926 19:49:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:27.926 19:49:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.926 19:49:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:27.926 [ 00:08:27.926 { 00:08:27.926 "name": "BaseBdev1", 00:08:27.926 "aliases": [ 00:08:27.926 "3f6c38ee-6da8-467f-9b85-5ea8f5e645f2" 00:08:27.926 ], 00:08:27.926 "product_name": "Malloc disk", 00:08:27.926 "block_size": 512, 00:08:27.926 "num_blocks": 65536, 00:08:27.926 "uuid": "3f6c38ee-6da8-467f-9b85-5ea8f5e645f2", 00:08:27.926 "assigned_rate_limits": { 00:08:27.926 "rw_ios_per_sec": 0, 00:08:27.926 "rw_mbytes_per_sec": 0, 00:08:27.926 "r_mbytes_per_sec": 0, 00:08:27.926 "w_mbytes_per_sec": 0 00:08:27.926 }, 00:08:27.926 "claimed": true, 00:08:27.926 "claim_type": "exclusive_write", 00:08:27.926 "zoned": false, 00:08:27.926 "supported_io_types": { 00:08:27.926 "read": true, 00:08:27.926 "write": true, 00:08:27.926 "unmap": true, 00:08:27.926 "flush": true, 00:08:27.926 "reset": true, 00:08:27.926 "nvme_admin": false, 00:08:27.926 "nvme_io": false, 00:08:27.926 "nvme_io_md": false, 00:08:27.926 "write_zeroes": true, 00:08:27.926 "zcopy": true, 00:08:27.926 "get_zone_info": false, 00:08:27.926 "zone_management": false, 00:08:27.926 "zone_append": false, 00:08:27.926 "compare": false, 00:08:27.926 "compare_and_write": false, 00:08:27.926 "abort": true, 00:08:27.926 "seek_hole": false, 00:08:27.926 "seek_data": false, 00:08:27.926 "copy": true, 00:08:27.926 "nvme_iov_md": false 00:08:27.926 }, 00:08:27.926 "memory_domains": [ 00:08:27.926 { 00:08:27.926 "dma_device_id": "system", 00:08:27.926 "dma_device_type": 1 00:08:27.926 }, 00:08:27.926 { 00:08:27.926 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:27.926 "dma_device_type": 2 00:08:27.926 } 
00:08:27.926 ], 00:08:27.926 "driver_specific": {} 00:08:27.926 } 00:08:27.926 ] 00:08:27.926 19:49:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.926 19:49:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:27.926 19:49:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:08:27.926 19:49:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:27.926 19:49:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:27.926 19:49:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:27.926 19:49:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:27.926 19:49:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:08:27.926 19:49:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:27.926 19:49:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:27.926 19:49:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:27.926 19:49:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:27.926 19:49:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:27.926 19:49:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:27.926 19:49:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.926 19:49:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:27.926 19:49:18 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.926 19:49:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:27.926 "name": "Existed_Raid", 00:08:27.926 "uuid": "3e4e27d9-e097-48c8-868a-7e10074d17fc", 00:08:27.926 "strip_size_kb": 64, 00:08:27.926 "state": "configuring", 00:08:27.926 "raid_level": "concat", 00:08:27.926 "superblock": true, 00:08:27.926 "num_base_bdevs": 4, 00:08:27.926 "num_base_bdevs_discovered": 1, 00:08:27.926 "num_base_bdevs_operational": 4, 00:08:27.926 "base_bdevs_list": [ 00:08:27.926 { 00:08:27.926 "name": "BaseBdev1", 00:08:27.926 "uuid": "3f6c38ee-6da8-467f-9b85-5ea8f5e645f2", 00:08:27.926 "is_configured": true, 00:08:27.926 "data_offset": 2048, 00:08:27.926 "data_size": 63488 00:08:27.926 }, 00:08:27.926 { 00:08:27.926 "name": "BaseBdev2", 00:08:27.926 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:27.926 "is_configured": false, 00:08:27.926 "data_offset": 0, 00:08:27.926 "data_size": 0 00:08:27.926 }, 00:08:27.926 { 00:08:27.926 "name": "BaseBdev3", 00:08:27.926 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:27.926 "is_configured": false, 00:08:27.926 "data_offset": 0, 00:08:27.926 "data_size": 0 00:08:27.926 }, 00:08:27.926 { 00:08:27.926 "name": "BaseBdev4", 00:08:27.926 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:27.926 "is_configured": false, 00:08:27.926 "data_offset": 0, 00:08:27.926 "data_size": 0 00:08:27.926 } 00:08:27.926 ] 00:08:27.926 }' 00:08:27.926 19:49:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:27.926 19:49:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:28.185 19:49:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:28.185 19:49:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.185 19:49:19 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:28.185 [2024-11-26 19:49:19.105043] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:28.185 [2024-11-26 19:49:19.105248] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:28.185 19:49:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.185 19:49:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:08:28.185 19:49:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.185 19:49:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:28.185 [2024-11-26 19:49:19.113104] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:28.185 [2024-11-26 19:49:19.115159] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:28.185 [2024-11-26 19:49:19.115284] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:28.185 [2024-11-26 19:49:19.115352] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:28.185 [2024-11-26 19:49:19.115384] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:28.185 [2024-11-26 19:49:19.115402] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:08:28.185 [2024-11-26 19:49:19.115423] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:08:28.185 19:49:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.185 19:49:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # 
(( i = 1 )) 00:08:28.185 19:49:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:28.185 19:49:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:08:28.185 19:49:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:28.185 19:49:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:28.185 19:49:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:28.185 19:49:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:28.185 19:49:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:08:28.185 19:49:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:28.185 19:49:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:28.185 19:49:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:28.185 19:49:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:28.444 19:49:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:28.444 19:49:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:28.444 19:49:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.444 19:49:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:28.444 19:49:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.444 19:49:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:08:28.444 "name": "Existed_Raid", 00:08:28.444 "uuid": "9b449e81-e41e-456f-8508-9cf53dd5e03e", 00:08:28.444 "strip_size_kb": 64, 00:08:28.444 "state": "configuring", 00:08:28.444 "raid_level": "concat", 00:08:28.444 "superblock": true, 00:08:28.444 "num_base_bdevs": 4, 00:08:28.444 "num_base_bdevs_discovered": 1, 00:08:28.444 "num_base_bdevs_operational": 4, 00:08:28.444 "base_bdevs_list": [ 00:08:28.444 { 00:08:28.444 "name": "BaseBdev1", 00:08:28.444 "uuid": "3f6c38ee-6da8-467f-9b85-5ea8f5e645f2", 00:08:28.444 "is_configured": true, 00:08:28.444 "data_offset": 2048, 00:08:28.444 "data_size": 63488 00:08:28.444 }, 00:08:28.444 { 00:08:28.444 "name": "BaseBdev2", 00:08:28.445 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:28.445 "is_configured": false, 00:08:28.445 "data_offset": 0, 00:08:28.445 "data_size": 0 00:08:28.445 }, 00:08:28.445 { 00:08:28.445 "name": "BaseBdev3", 00:08:28.445 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:28.445 "is_configured": false, 00:08:28.445 "data_offset": 0, 00:08:28.445 "data_size": 0 00:08:28.445 }, 00:08:28.445 { 00:08:28.445 "name": "BaseBdev4", 00:08:28.445 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:28.445 "is_configured": false, 00:08:28.445 "data_offset": 0, 00:08:28.445 "data_size": 0 00:08:28.445 } 00:08:28.445 ] 00:08:28.445 }' 00:08:28.445 19:49:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:28.445 19:49:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:28.708 19:49:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:28.708 19:49:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.708 19:49:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:28.708 [2024-11-26 19:49:19.457969] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev2 is claimed 00:08:28.708 BaseBdev2 00:08:28.708 19:49:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.708 19:49:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:28.708 19:49:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:28.708 19:49:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:28.708 19:49:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:28.708 19:49:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:28.708 19:49:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:28.708 19:49:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:28.708 19:49:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.708 19:49:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:28.708 19:49:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.708 19:49:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:28.708 19:49:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.708 19:49:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:28.708 [ 00:08:28.708 { 00:08:28.708 "name": "BaseBdev2", 00:08:28.708 "aliases": [ 00:08:28.708 "a8f19cee-38f4-4b66-a95f-506482968ecd" 00:08:28.708 ], 00:08:28.708 "product_name": "Malloc disk", 00:08:28.708 "block_size": 512, 00:08:28.708 "num_blocks": 65536, 00:08:28.708 "uuid": "a8f19cee-38f4-4b66-a95f-506482968ecd", 
00:08:28.708 "assigned_rate_limits": { 00:08:28.708 "rw_ios_per_sec": 0, 00:08:28.708 "rw_mbytes_per_sec": 0, 00:08:28.708 "r_mbytes_per_sec": 0, 00:08:28.708 "w_mbytes_per_sec": 0 00:08:28.708 }, 00:08:28.708 "claimed": true, 00:08:28.708 "claim_type": "exclusive_write", 00:08:28.708 "zoned": false, 00:08:28.708 "supported_io_types": { 00:08:28.708 "read": true, 00:08:28.708 "write": true, 00:08:28.708 "unmap": true, 00:08:28.708 "flush": true, 00:08:28.708 "reset": true, 00:08:28.708 "nvme_admin": false, 00:08:28.708 "nvme_io": false, 00:08:28.708 "nvme_io_md": false, 00:08:28.708 "write_zeroes": true, 00:08:28.708 "zcopy": true, 00:08:28.708 "get_zone_info": false, 00:08:28.708 "zone_management": false, 00:08:28.708 "zone_append": false, 00:08:28.708 "compare": false, 00:08:28.708 "compare_and_write": false, 00:08:28.708 "abort": true, 00:08:28.708 "seek_hole": false, 00:08:28.708 "seek_data": false, 00:08:28.708 "copy": true, 00:08:28.708 "nvme_iov_md": false 00:08:28.708 }, 00:08:28.708 "memory_domains": [ 00:08:28.708 { 00:08:28.708 "dma_device_id": "system", 00:08:28.708 "dma_device_type": 1 00:08:28.708 }, 00:08:28.708 { 00:08:28.708 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:28.708 "dma_device_type": 2 00:08:28.708 } 00:08:28.708 ], 00:08:28.708 "driver_specific": {} 00:08:28.708 } 00:08:28.708 ] 00:08:28.708 19:49:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.708 19:49:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:28.708 19:49:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:28.709 19:49:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:28.709 19:49:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:08:28.709 19:49:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:08:28.709 19:49:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:28.709 19:49:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:28.709 19:49:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:28.709 19:49:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:08:28.709 19:49:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:28.709 19:49:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:28.709 19:49:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:28.709 19:49:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:28.709 19:49:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:28.709 19:49:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.709 19:49:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:28.709 19:49:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:28.709 19:49:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.709 19:49:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:28.709 "name": "Existed_Raid", 00:08:28.709 "uuid": "9b449e81-e41e-456f-8508-9cf53dd5e03e", 00:08:28.709 "strip_size_kb": 64, 00:08:28.709 "state": "configuring", 00:08:28.709 "raid_level": "concat", 00:08:28.709 "superblock": true, 00:08:28.709 "num_base_bdevs": 4, 00:08:28.709 "num_base_bdevs_discovered": 2, 00:08:28.709 
"num_base_bdevs_operational": 4, 00:08:28.709 "base_bdevs_list": [ 00:08:28.709 { 00:08:28.709 "name": "BaseBdev1", 00:08:28.709 "uuid": "3f6c38ee-6da8-467f-9b85-5ea8f5e645f2", 00:08:28.709 "is_configured": true, 00:08:28.709 "data_offset": 2048, 00:08:28.709 "data_size": 63488 00:08:28.709 }, 00:08:28.709 { 00:08:28.709 "name": "BaseBdev2", 00:08:28.709 "uuid": "a8f19cee-38f4-4b66-a95f-506482968ecd", 00:08:28.709 "is_configured": true, 00:08:28.709 "data_offset": 2048, 00:08:28.709 "data_size": 63488 00:08:28.709 }, 00:08:28.709 { 00:08:28.709 "name": "BaseBdev3", 00:08:28.709 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:28.709 "is_configured": false, 00:08:28.709 "data_offset": 0, 00:08:28.709 "data_size": 0 00:08:28.709 }, 00:08:28.709 { 00:08:28.709 "name": "BaseBdev4", 00:08:28.709 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:28.709 "is_configured": false, 00:08:28.709 "data_offset": 0, 00:08:28.709 "data_size": 0 00:08:28.709 } 00:08:28.709 ] 00:08:28.709 }' 00:08:28.709 19:49:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:28.709 19:49:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:28.971 19:49:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:28.971 19:49:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.971 19:49:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:28.971 [2024-11-26 19:49:19.855454] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:28.971 BaseBdev3 00:08:28.971 19:49:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.971 19:49:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:08:28.971 19:49:19 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:28.971 19:49:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:28.971 19:49:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:28.971 19:49:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:28.971 19:49:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:28.971 19:49:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:28.971 19:49:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.971 19:49:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:28.971 19:49:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.971 19:49:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:28.971 19:49:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.971 19:49:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:28.971 [ 00:08:28.971 { 00:08:28.971 "name": "BaseBdev3", 00:08:28.971 "aliases": [ 00:08:28.971 "24ec7018-c127-4dad-8c2f-f45f236b9e80" 00:08:28.971 ], 00:08:28.971 "product_name": "Malloc disk", 00:08:28.971 "block_size": 512, 00:08:28.971 "num_blocks": 65536, 00:08:28.971 "uuid": "24ec7018-c127-4dad-8c2f-f45f236b9e80", 00:08:28.971 "assigned_rate_limits": { 00:08:28.971 "rw_ios_per_sec": 0, 00:08:28.971 "rw_mbytes_per_sec": 0, 00:08:28.971 "r_mbytes_per_sec": 0, 00:08:28.971 "w_mbytes_per_sec": 0 00:08:28.971 }, 00:08:28.971 "claimed": true, 00:08:28.971 "claim_type": "exclusive_write", 00:08:28.971 "zoned": false, 00:08:28.971 "supported_io_types": { 
00:08:28.971 "read": true, 00:08:28.971 "write": true, 00:08:28.971 "unmap": true, 00:08:28.971 "flush": true, 00:08:28.971 "reset": true, 00:08:28.971 "nvme_admin": false, 00:08:28.971 "nvme_io": false, 00:08:28.971 "nvme_io_md": false, 00:08:28.971 "write_zeroes": true, 00:08:28.971 "zcopy": true, 00:08:28.971 "get_zone_info": false, 00:08:28.971 "zone_management": false, 00:08:28.971 "zone_append": false, 00:08:28.971 "compare": false, 00:08:28.971 "compare_and_write": false, 00:08:28.971 "abort": true, 00:08:28.971 "seek_hole": false, 00:08:28.971 "seek_data": false, 00:08:28.971 "copy": true, 00:08:28.971 "nvme_iov_md": false 00:08:28.971 }, 00:08:28.971 "memory_domains": [ 00:08:28.971 { 00:08:28.971 "dma_device_id": "system", 00:08:28.971 "dma_device_type": 1 00:08:28.971 }, 00:08:28.971 { 00:08:28.971 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:28.971 "dma_device_type": 2 00:08:28.971 } 00:08:28.971 ], 00:08:28.971 "driver_specific": {} 00:08:28.971 } 00:08:28.971 ] 00:08:28.971 19:49:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.971 19:49:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:28.971 19:49:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:28.971 19:49:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:28.971 19:49:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:08:28.971 19:49:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:28.971 19:49:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:28.971 19:49:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:28.971 19:49:19 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:28.971 19:49:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:08:28.971 19:49:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:28.971 19:49:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:28.971 19:49:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:28.971 19:49:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:28.971 19:49:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:28.971 19:49:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.971 19:49:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:28.971 19:49:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:28.971 19:49:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.232 19:49:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:29.232 "name": "Existed_Raid", 00:08:29.232 "uuid": "9b449e81-e41e-456f-8508-9cf53dd5e03e", 00:08:29.232 "strip_size_kb": 64, 00:08:29.232 "state": "configuring", 00:08:29.232 "raid_level": "concat", 00:08:29.232 "superblock": true, 00:08:29.232 "num_base_bdevs": 4, 00:08:29.232 "num_base_bdevs_discovered": 3, 00:08:29.232 "num_base_bdevs_operational": 4, 00:08:29.232 "base_bdevs_list": [ 00:08:29.232 { 00:08:29.232 "name": "BaseBdev1", 00:08:29.232 "uuid": "3f6c38ee-6da8-467f-9b85-5ea8f5e645f2", 00:08:29.232 "is_configured": true, 00:08:29.232 "data_offset": 2048, 00:08:29.232 "data_size": 63488 00:08:29.232 }, 00:08:29.232 { 00:08:29.232 "name": "BaseBdev2", 00:08:29.232 
"uuid": "a8f19cee-38f4-4b66-a95f-506482968ecd", 00:08:29.232 "is_configured": true, 00:08:29.232 "data_offset": 2048, 00:08:29.232 "data_size": 63488 00:08:29.232 }, 00:08:29.232 { 00:08:29.232 "name": "BaseBdev3", 00:08:29.232 "uuid": "24ec7018-c127-4dad-8c2f-f45f236b9e80", 00:08:29.232 "is_configured": true, 00:08:29.232 "data_offset": 2048, 00:08:29.232 "data_size": 63488 00:08:29.232 }, 00:08:29.232 { 00:08:29.232 "name": "BaseBdev4", 00:08:29.233 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:29.233 "is_configured": false, 00:08:29.233 "data_offset": 0, 00:08:29.233 "data_size": 0 00:08:29.233 } 00:08:29.233 ] 00:08:29.233 }' 00:08:29.233 19:49:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:29.233 19:49:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:29.492 19:49:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:08:29.492 19:49:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.492 19:49:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:29.492 [2024-11-26 19:49:20.216456] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:08:29.492 [2024-11-26 19:49:20.216912] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:29.492 BaseBdev4 00:08:29.492 [2024-11-26 19:49:20.217007] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:08:29.492 [2024-11-26 19:49:20.217326] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:08:29.492 [2024-11-26 19:49:20.217497] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:29.493 [2024-11-26 19:49:20.217592] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:08:29.493 [2024-11-26 19:49:20.217773] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:29.493 19:49:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.493 19:49:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:08:29.493 19:49:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:08:29.493 19:49:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:29.493 19:49:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:29.493 19:49:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:29.493 19:49:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:29.493 19:49:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:29.493 19:49:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.493 19:49:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:29.493 19:49:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.493 19:49:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:08:29.493 19:49:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.493 19:49:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:29.493 [ 00:08:29.493 { 00:08:29.493 "name": "BaseBdev4", 00:08:29.493 "aliases": [ 00:08:29.493 "66024f76-d120-4b99-95f3-5a151276f674" 00:08:29.493 ], 00:08:29.493 "product_name": "Malloc disk", 00:08:29.493 "block_size": 512, 00:08:29.493 
"num_blocks": 65536, 00:08:29.493 "uuid": "66024f76-d120-4b99-95f3-5a151276f674", 00:08:29.493 "assigned_rate_limits": { 00:08:29.493 "rw_ios_per_sec": 0, 00:08:29.493 "rw_mbytes_per_sec": 0, 00:08:29.493 "r_mbytes_per_sec": 0, 00:08:29.493 "w_mbytes_per_sec": 0 00:08:29.493 }, 00:08:29.493 "claimed": true, 00:08:29.493 "claim_type": "exclusive_write", 00:08:29.493 "zoned": false, 00:08:29.493 "supported_io_types": { 00:08:29.493 "read": true, 00:08:29.493 "write": true, 00:08:29.493 "unmap": true, 00:08:29.493 "flush": true, 00:08:29.493 "reset": true, 00:08:29.493 "nvme_admin": false, 00:08:29.493 "nvme_io": false, 00:08:29.493 "nvme_io_md": false, 00:08:29.493 "write_zeroes": true, 00:08:29.493 "zcopy": true, 00:08:29.493 "get_zone_info": false, 00:08:29.493 "zone_management": false, 00:08:29.493 "zone_append": false, 00:08:29.493 "compare": false, 00:08:29.493 "compare_and_write": false, 00:08:29.493 "abort": true, 00:08:29.493 "seek_hole": false, 00:08:29.493 "seek_data": false, 00:08:29.493 "copy": true, 00:08:29.493 "nvme_iov_md": false 00:08:29.493 }, 00:08:29.493 "memory_domains": [ 00:08:29.493 { 00:08:29.493 "dma_device_id": "system", 00:08:29.493 "dma_device_type": 1 00:08:29.493 }, 00:08:29.493 { 00:08:29.493 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:29.493 "dma_device_type": 2 00:08:29.493 } 00:08:29.493 ], 00:08:29.493 "driver_specific": {} 00:08:29.493 } 00:08:29.493 ] 00:08:29.493 19:49:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.493 19:49:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:29.493 19:49:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:29.493 19:49:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:29.493 19:49:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 
00:08:29.493 19:49:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:29.493 19:49:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:29.493 19:49:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:29.493 19:49:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:29.493 19:49:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:08:29.493 19:49:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:29.493 19:49:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:29.493 19:49:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:29.493 19:49:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:29.493 19:49:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:29.493 19:49:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:29.493 19:49:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.493 19:49:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:29.493 19:49:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.493 19:49:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:29.493 "name": "Existed_Raid", 00:08:29.493 "uuid": "9b449e81-e41e-456f-8508-9cf53dd5e03e", 00:08:29.493 "strip_size_kb": 64, 00:08:29.493 "state": "online", 00:08:29.493 "raid_level": "concat", 00:08:29.493 "superblock": true, 00:08:29.493 "num_base_bdevs": 4, 
00:08:29.493 "num_base_bdevs_discovered": 4, 00:08:29.493 "num_base_bdevs_operational": 4, 00:08:29.493 "base_bdevs_list": [ 00:08:29.493 { 00:08:29.493 "name": "BaseBdev1", 00:08:29.493 "uuid": "3f6c38ee-6da8-467f-9b85-5ea8f5e645f2", 00:08:29.493 "is_configured": true, 00:08:29.493 "data_offset": 2048, 00:08:29.493 "data_size": 63488 00:08:29.493 }, 00:08:29.493 { 00:08:29.493 "name": "BaseBdev2", 00:08:29.493 "uuid": "a8f19cee-38f4-4b66-a95f-506482968ecd", 00:08:29.493 "is_configured": true, 00:08:29.493 "data_offset": 2048, 00:08:29.493 "data_size": 63488 00:08:29.493 }, 00:08:29.493 { 00:08:29.493 "name": "BaseBdev3", 00:08:29.493 "uuid": "24ec7018-c127-4dad-8c2f-f45f236b9e80", 00:08:29.493 "is_configured": true, 00:08:29.493 "data_offset": 2048, 00:08:29.493 "data_size": 63488 00:08:29.493 }, 00:08:29.493 { 00:08:29.493 "name": "BaseBdev4", 00:08:29.493 "uuid": "66024f76-d120-4b99-95f3-5a151276f674", 00:08:29.493 "is_configured": true, 00:08:29.493 "data_offset": 2048, 00:08:29.493 "data_size": 63488 00:08:29.493 } 00:08:29.493 ] 00:08:29.493 }' 00:08:29.493 19:49:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:29.493 19:49:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:29.754 19:49:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:29.754 19:49:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:29.754 19:49:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:29.754 19:49:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:29.754 19:49:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:29.754 19:49:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:29.754 
19:49:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:29.754 19:49:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.754 19:49:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:29.754 19:49:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:29.754 [2024-11-26 19:49:20.576985] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:29.754 19:49:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.754 19:49:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:29.754 "name": "Existed_Raid", 00:08:29.754 "aliases": [ 00:08:29.754 "9b449e81-e41e-456f-8508-9cf53dd5e03e" 00:08:29.754 ], 00:08:29.754 "product_name": "Raid Volume", 00:08:29.754 "block_size": 512, 00:08:29.754 "num_blocks": 253952, 00:08:29.754 "uuid": "9b449e81-e41e-456f-8508-9cf53dd5e03e", 00:08:29.754 "assigned_rate_limits": { 00:08:29.754 "rw_ios_per_sec": 0, 00:08:29.754 "rw_mbytes_per_sec": 0, 00:08:29.754 "r_mbytes_per_sec": 0, 00:08:29.754 "w_mbytes_per_sec": 0 00:08:29.754 }, 00:08:29.754 "claimed": false, 00:08:29.754 "zoned": false, 00:08:29.754 "supported_io_types": { 00:08:29.754 "read": true, 00:08:29.754 "write": true, 00:08:29.754 "unmap": true, 00:08:29.754 "flush": true, 00:08:29.754 "reset": true, 00:08:29.754 "nvme_admin": false, 00:08:29.754 "nvme_io": false, 00:08:29.754 "nvme_io_md": false, 00:08:29.754 "write_zeroes": true, 00:08:29.754 "zcopy": false, 00:08:29.754 "get_zone_info": false, 00:08:29.754 "zone_management": false, 00:08:29.754 "zone_append": false, 00:08:29.754 "compare": false, 00:08:29.754 "compare_and_write": false, 00:08:29.754 "abort": false, 00:08:29.754 "seek_hole": false, 00:08:29.754 "seek_data": false, 00:08:29.754 "copy": false, 00:08:29.754 
"nvme_iov_md": false 00:08:29.754 }, 00:08:29.754 "memory_domains": [ 00:08:29.754 { 00:08:29.754 "dma_device_id": "system", 00:08:29.754 "dma_device_type": 1 00:08:29.754 }, 00:08:29.754 { 00:08:29.754 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:29.754 "dma_device_type": 2 00:08:29.754 }, 00:08:29.754 { 00:08:29.754 "dma_device_id": "system", 00:08:29.754 "dma_device_type": 1 00:08:29.754 }, 00:08:29.754 { 00:08:29.754 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:29.754 "dma_device_type": 2 00:08:29.754 }, 00:08:29.754 { 00:08:29.754 "dma_device_id": "system", 00:08:29.754 "dma_device_type": 1 00:08:29.754 }, 00:08:29.754 { 00:08:29.754 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:29.754 "dma_device_type": 2 00:08:29.754 }, 00:08:29.754 { 00:08:29.754 "dma_device_id": "system", 00:08:29.754 "dma_device_type": 1 00:08:29.754 }, 00:08:29.754 { 00:08:29.754 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:29.754 "dma_device_type": 2 00:08:29.754 } 00:08:29.754 ], 00:08:29.754 "driver_specific": { 00:08:29.754 "raid": { 00:08:29.754 "uuid": "9b449e81-e41e-456f-8508-9cf53dd5e03e", 00:08:29.754 "strip_size_kb": 64, 00:08:29.754 "state": "online", 00:08:29.754 "raid_level": "concat", 00:08:29.754 "superblock": true, 00:08:29.754 "num_base_bdevs": 4, 00:08:29.754 "num_base_bdevs_discovered": 4, 00:08:29.754 "num_base_bdevs_operational": 4, 00:08:29.754 "base_bdevs_list": [ 00:08:29.754 { 00:08:29.754 "name": "BaseBdev1", 00:08:29.754 "uuid": "3f6c38ee-6da8-467f-9b85-5ea8f5e645f2", 00:08:29.754 "is_configured": true, 00:08:29.754 "data_offset": 2048, 00:08:29.754 "data_size": 63488 00:08:29.754 }, 00:08:29.754 { 00:08:29.754 "name": "BaseBdev2", 00:08:29.754 "uuid": "a8f19cee-38f4-4b66-a95f-506482968ecd", 00:08:29.755 "is_configured": true, 00:08:29.755 "data_offset": 2048, 00:08:29.755 "data_size": 63488 00:08:29.755 }, 00:08:29.755 { 00:08:29.755 "name": "BaseBdev3", 00:08:29.755 "uuid": "24ec7018-c127-4dad-8c2f-f45f236b9e80", 00:08:29.755 "is_configured": true, 
00:08:29.755 "data_offset": 2048, 00:08:29.755 "data_size": 63488 00:08:29.755 }, 00:08:29.755 { 00:08:29.755 "name": "BaseBdev4", 00:08:29.755 "uuid": "66024f76-d120-4b99-95f3-5a151276f674", 00:08:29.755 "is_configured": true, 00:08:29.755 "data_offset": 2048, 00:08:29.755 "data_size": 63488 00:08:29.755 } 00:08:29.755 ] 00:08:29.755 } 00:08:29.755 } 00:08:29.755 }' 00:08:29.755 19:49:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:29.755 19:49:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:29.755 BaseBdev2 00:08:29.755 BaseBdev3 00:08:29.755 BaseBdev4' 00:08:29.755 19:49:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:29.755 19:49:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:29.755 19:49:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:29.755 19:49:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:29.755 19:49:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:29.755 19:49:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.755 19:49:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:29.755 19:49:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.015 19:49:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:30.015 19:49:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:30.015 19:49:20 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:30.015 19:49:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:30.015 19:49:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.015 19:49:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:30.015 19:49:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:30.015 19:49:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.015 19:49:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:30.015 19:49:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:30.015 19:49:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:30.015 19:49:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:30.015 19:49:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.015 19:49:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:30.015 19:49:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:30.015 19:49:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.015 19:49:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:30.015 19:49:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:30.015 19:49:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:08:30.015 19:49:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:30.016 19:49:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:08:30.016 19:49:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.016 19:49:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:30.016 19:49:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.016 19:49:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:30.016 19:49:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:30.016 19:49:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:30.016 19:49:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.016 19:49:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:30.016 [2024-11-26 19:49:20.796710] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:30.016 [2024-11-26 19:49:20.796743] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:30.016 [2024-11-26 19:49:20.796798] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:30.016 19:49:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.016 19:49:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:30.016 19:49:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:08:30.016 19:49:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:08:30.016 19:49:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:08:30.016 19:49:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:30.016 19:49:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:08:30.016 19:49:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:30.016 19:49:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:30.016 19:49:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:30.016 19:49:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:30.016 19:49:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:30.016 19:49:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:30.016 19:49:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:30.016 19:49:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:30.016 19:49:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:30.016 19:49:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:30.016 19:49:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.016 19:49:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:30.016 19:49:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:30.016 19:49:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
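At this point the trace calls `has_redundancy concat` (bdev_raid.sh@198), gets return code 1, and therefore sets `expected_state=offline` before deleting BaseBdev1: a concat array has no redundancy, so removing any base bdev must take the raid bdev offline. A sketch of that decision, with the assumption that only the mirrored/parity levels count as redundant (the exact level list in the real helper may differ):

```shell
#!/bin/sh
# Sketch of the has_redundancy helper seen in the trace. The set of
# redundant levels (raid1, raid5f) is an assumption for illustration.
has_redundancy() {
    case $1 in
        raid1 | raid5f) return 0 ;;
        *) return 1 ;;
    esac
}

# After removing a base bdev, the test expects the array to stay online
# only when the raid level can tolerate the loss.
if has_redundancy concat; then
    expected_state=online
else
    expected_state=offline
fi
echo "$expected_state"
```

For `concat` this prints `offline`, which is exactly the state the subsequent `verify_raid_bdev_state Existed_Raid offline concat 64 3` call checks for.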
00:08:30.016 19:49:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:30.016 "name": "Existed_Raid", 00:08:30.016 "uuid": "9b449e81-e41e-456f-8508-9cf53dd5e03e", 00:08:30.016 "strip_size_kb": 64, 00:08:30.016 "state": "offline", 00:08:30.016 "raid_level": "concat", 00:08:30.016 "superblock": true, 00:08:30.016 "num_base_bdevs": 4, 00:08:30.016 "num_base_bdevs_discovered": 3, 00:08:30.016 "num_base_bdevs_operational": 3, 00:08:30.016 "base_bdevs_list": [ 00:08:30.016 { 00:08:30.016 "name": null, 00:08:30.016 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:30.016 "is_configured": false, 00:08:30.016 "data_offset": 0, 00:08:30.016 "data_size": 63488 00:08:30.016 }, 00:08:30.016 { 00:08:30.016 "name": "BaseBdev2", 00:08:30.016 "uuid": "a8f19cee-38f4-4b66-a95f-506482968ecd", 00:08:30.016 "is_configured": true, 00:08:30.016 "data_offset": 2048, 00:08:30.016 "data_size": 63488 00:08:30.016 }, 00:08:30.016 { 00:08:30.016 "name": "BaseBdev3", 00:08:30.016 "uuid": "24ec7018-c127-4dad-8c2f-f45f236b9e80", 00:08:30.016 "is_configured": true, 00:08:30.016 "data_offset": 2048, 00:08:30.016 "data_size": 63488 00:08:30.016 }, 00:08:30.016 { 00:08:30.016 "name": "BaseBdev4", 00:08:30.016 "uuid": "66024f76-d120-4b99-95f3-5a151276f674", 00:08:30.016 "is_configured": true, 00:08:30.016 "data_offset": 2048, 00:08:30.016 "data_size": 63488 00:08:30.016 } 00:08:30.016 ] 00:08:30.016 }' 00:08:30.016 19:49:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:30.016 19:49:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:30.301 19:49:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:30.301 19:49:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:30.301 19:49:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:30.301 19:49:21 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:30.301 19:49:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.301 19:49:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:30.301 19:49:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.301 19:49:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:30.301 19:49:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:30.301 19:49:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:30.301 19:49:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.301 19:49:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:30.301 [2024-11-26 19:49:21.212599] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:30.559 19:49:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.559 19:49:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:30.559 19:49:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:30.559 19:49:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:30.559 19:49:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:30.559 19:49:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.559 19:49:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:30.559 19:49:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:08:30.559 19:49:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:30.559 19:49:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:30.559 19:49:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:08:30.559 19:49:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.559 19:49:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:30.559 [2024-11-26 19:49:21.308003] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:30.559 19:49:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.559 19:49:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:30.559 19:49:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:30.559 19:49:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:30.559 19:49:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:30.559 19:49:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.559 19:49:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:30.559 19:49:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.559 19:49:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:30.559 19:49:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:30.559 19:49:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:08:30.559 19:49:21 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.559 19:49:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:30.559 [2024-11-26 19:49:21.410792] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:08:30.559 [2024-11-26 19:49:21.410954] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:30.559 19:49:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.559 19:49:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:30.559 19:49:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:30.559 19:49:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:30.559 19:49:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.559 19:49:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:30.559 19:49:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:30.559 19:49:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.822 19:49:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:30.822 19:49:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:30.822 19:49:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:08:30.822 19:49:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:08:30.822 19:49:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:30.822 19:49:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:08:30.822 19:49:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.822 19:49:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:30.822 BaseBdev2 00:08:30.822 19:49:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.822 19:49:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:08:30.822 19:49:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:30.822 19:49:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:30.822 19:49:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:30.822 19:49:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:30.822 19:49:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:30.822 19:49:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:30.822 19:49:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.822 19:49:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:30.822 19:49:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.822 19:49:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:30.822 19:49:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.822 19:49:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:30.822 [ 00:08:30.822 { 00:08:30.822 "name": "BaseBdev2", 00:08:30.822 "aliases": [ 00:08:30.822 
"0ccb4f58-4ae2-4c10-b1a8-9dbea69f2a93" 00:08:30.822 ], 00:08:30.822 "product_name": "Malloc disk", 00:08:30.822 "block_size": 512, 00:08:30.822 "num_blocks": 65536, 00:08:30.822 "uuid": "0ccb4f58-4ae2-4c10-b1a8-9dbea69f2a93", 00:08:30.822 "assigned_rate_limits": { 00:08:30.822 "rw_ios_per_sec": 0, 00:08:30.822 "rw_mbytes_per_sec": 0, 00:08:30.822 "r_mbytes_per_sec": 0, 00:08:30.822 "w_mbytes_per_sec": 0 00:08:30.822 }, 00:08:30.822 "claimed": false, 00:08:30.822 "zoned": false, 00:08:30.822 "supported_io_types": { 00:08:30.822 "read": true, 00:08:30.822 "write": true, 00:08:30.822 "unmap": true, 00:08:30.822 "flush": true, 00:08:30.822 "reset": true, 00:08:30.822 "nvme_admin": false, 00:08:30.822 "nvme_io": false, 00:08:30.822 "nvme_io_md": false, 00:08:30.822 "write_zeroes": true, 00:08:30.822 "zcopy": true, 00:08:30.822 "get_zone_info": false, 00:08:30.822 "zone_management": false, 00:08:30.822 "zone_append": false, 00:08:30.822 "compare": false, 00:08:30.822 "compare_and_write": false, 00:08:30.822 "abort": true, 00:08:30.822 "seek_hole": false, 00:08:30.822 "seek_data": false, 00:08:30.822 "copy": true, 00:08:30.822 "nvme_iov_md": false 00:08:30.822 }, 00:08:30.822 "memory_domains": [ 00:08:30.822 { 00:08:30.822 "dma_device_id": "system", 00:08:30.822 "dma_device_type": 1 00:08:30.822 }, 00:08:30.822 { 00:08:30.822 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:30.822 "dma_device_type": 2 00:08:30.822 } 00:08:30.822 ], 00:08:30.822 "driver_specific": {} 00:08:30.822 } 00:08:30.822 ] 00:08:30.822 19:49:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.822 19:49:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:30.822 19:49:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:30.823 19:49:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:30.823 19:49:21 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:30.823 19:49:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.823 19:49:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:30.823 BaseBdev3 00:08:30.823 19:49:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.823 19:49:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:08:30.823 19:49:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:30.823 19:49:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:30.823 19:49:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:30.823 19:49:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:30.823 19:49:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:30.823 19:49:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:30.823 19:49:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.823 19:49:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:30.823 19:49:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.823 19:49:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:30.823 19:49:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.823 19:49:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:30.823 [ 00:08:30.823 { 
00:08:30.823 "name": "BaseBdev3", 00:08:30.823 "aliases": [ 00:08:30.823 "34075943-cb70-41b3-b689-278605b8f1e6" 00:08:30.823 ], 00:08:30.823 "product_name": "Malloc disk", 00:08:30.823 "block_size": 512, 00:08:30.823 "num_blocks": 65536, 00:08:30.823 "uuid": "34075943-cb70-41b3-b689-278605b8f1e6", 00:08:30.823 "assigned_rate_limits": { 00:08:30.823 "rw_ios_per_sec": 0, 00:08:30.823 "rw_mbytes_per_sec": 0, 00:08:30.823 "r_mbytes_per_sec": 0, 00:08:30.823 "w_mbytes_per_sec": 0 00:08:30.823 }, 00:08:30.823 "claimed": false, 00:08:30.823 "zoned": false, 00:08:30.823 "supported_io_types": { 00:08:30.823 "read": true, 00:08:30.823 "write": true, 00:08:30.823 "unmap": true, 00:08:30.823 "flush": true, 00:08:30.823 "reset": true, 00:08:30.823 "nvme_admin": false, 00:08:30.823 "nvme_io": false, 00:08:30.823 "nvme_io_md": false, 00:08:30.823 "write_zeroes": true, 00:08:30.823 "zcopy": true, 00:08:30.823 "get_zone_info": false, 00:08:30.823 "zone_management": false, 00:08:30.823 "zone_append": false, 00:08:30.823 "compare": false, 00:08:30.823 "compare_and_write": false, 00:08:30.823 "abort": true, 00:08:30.823 "seek_hole": false, 00:08:30.823 "seek_data": false, 00:08:30.823 "copy": true, 00:08:30.823 "nvme_iov_md": false 00:08:30.823 }, 00:08:30.823 "memory_domains": [ 00:08:30.823 { 00:08:30.823 "dma_device_id": "system", 00:08:30.823 "dma_device_type": 1 00:08:30.823 }, 00:08:30.823 { 00:08:30.823 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:30.823 "dma_device_type": 2 00:08:30.823 } 00:08:30.823 ], 00:08:30.823 "driver_specific": {} 00:08:30.823 } 00:08:30.823 ] 00:08:30.823 19:49:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.823 19:49:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:30.823 19:49:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:30.823 19:49:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:08:30.823 19:49:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:08:30.823 19:49:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.823 19:49:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:30.823 BaseBdev4 00:08:30.823 19:49:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.823 19:49:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:08:30.823 19:49:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:08:30.823 19:49:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:30.823 19:49:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:30.823 19:49:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:30.823 19:49:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:30.823 19:49:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:30.823 19:49:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.823 19:49:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:30.823 19:49:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.823 19:49:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:08:30.823 19:49:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.823 19:49:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:08:30.823 [ 00:08:30.823 { 00:08:30.823 "name": "BaseBdev4", 00:08:30.823 "aliases": [ 00:08:30.823 "b035fd71-3232-44d5-ae12-20961c6e790a" 00:08:30.823 ], 00:08:30.823 "product_name": "Malloc disk", 00:08:30.823 "block_size": 512, 00:08:30.823 "num_blocks": 65536, 00:08:30.823 "uuid": "b035fd71-3232-44d5-ae12-20961c6e790a", 00:08:30.823 "assigned_rate_limits": { 00:08:30.823 "rw_ios_per_sec": 0, 00:08:30.823 "rw_mbytes_per_sec": 0, 00:08:30.823 "r_mbytes_per_sec": 0, 00:08:30.823 "w_mbytes_per_sec": 0 00:08:30.823 }, 00:08:30.823 "claimed": false, 00:08:30.823 "zoned": false, 00:08:30.823 "supported_io_types": { 00:08:30.823 "read": true, 00:08:30.823 "write": true, 00:08:30.823 "unmap": true, 00:08:30.823 "flush": true, 00:08:30.823 "reset": true, 00:08:30.823 "nvme_admin": false, 00:08:30.823 "nvme_io": false, 00:08:30.823 "nvme_io_md": false, 00:08:30.823 "write_zeroes": true, 00:08:30.823 "zcopy": true, 00:08:30.823 "get_zone_info": false, 00:08:30.823 "zone_management": false, 00:08:30.823 "zone_append": false, 00:08:30.823 "compare": false, 00:08:30.823 "compare_and_write": false, 00:08:30.823 "abort": true, 00:08:30.823 "seek_hole": false, 00:08:30.823 "seek_data": false, 00:08:30.823 "copy": true, 00:08:30.823 "nvme_iov_md": false 00:08:30.823 }, 00:08:30.823 "memory_domains": [ 00:08:30.823 { 00:08:30.823 "dma_device_id": "system", 00:08:30.823 "dma_device_type": 1 00:08:30.823 }, 00:08:30.823 { 00:08:30.823 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:30.823 "dma_device_type": 2 00:08:30.823 } 00:08:30.823 ], 00:08:30.823 "driver_specific": {} 00:08:30.823 } 00:08:30.823 ] 00:08:30.823 19:49:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.823 19:49:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:30.823 19:49:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:30.823 19:49:21 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:30.823 19:49:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:08:30.823 19:49:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.823 19:49:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:30.823 [2024-11-26 19:49:21.680847] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:30.823 [2024-11-26 19:49:21.681027] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:30.823 [2024-11-26 19:49:21.681100] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:30.823 [2024-11-26 19:49:21.683107] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:30.823 [2024-11-26 19:49:21.683248] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:08:30.823 19:49:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.823 19:49:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:08:30.823 19:49:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:30.823 19:49:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:30.823 19:49:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:30.823 19:49:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:30.823 19:49:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:08:30.823 19:49:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:30.823 19:49:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:30.823 19:49:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:30.823 19:49:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:30.824 19:49:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:30.824 19:49:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:30.824 19:49:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.824 19:49:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:30.824 19:49:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.824 19:49:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:30.824 "name": "Existed_Raid", 00:08:30.824 "uuid": "b960dc41-85f5-4303-a34e-633c93465b3d", 00:08:30.824 "strip_size_kb": 64, 00:08:30.824 "state": "configuring", 00:08:30.824 "raid_level": "concat", 00:08:30.824 "superblock": true, 00:08:30.824 "num_base_bdevs": 4, 00:08:30.824 "num_base_bdevs_discovered": 3, 00:08:30.824 "num_base_bdevs_operational": 4, 00:08:30.824 "base_bdevs_list": [ 00:08:30.824 { 00:08:30.824 "name": "BaseBdev1", 00:08:30.824 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:30.824 "is_configured": false, 00:08:30.824 "data_offset": 0, 00:08:30.824 "data_size": 0 00:08:30.824 }, 00:08:30.824 { 00:08:30.824 "name": "BaseBdev2", 00:08:30.824 "uuid": "0ccb4f58-4ae2-4c10-b1a8-9dbea69f2a93", 00:08:30.824 "is_configured": true, 00:08:30.824 "data_offset": 2048, 00:08:30.824 "data_size": 63488 
00:08:30.824 }, 00:08:30.824 { 00:08:30.824 "name": "BaseBdev3", 00:08:30.824 "uuid": "34075943-cb70-41b3-b689-278605b8f1e6", 00:08:30.824 "is_configured": true, 00:08:30.824 "data_offset": 2048, 00:08:30.824 "data_size": 63488 00:08:30.824 }, 00:08:30.824 { 00:08:30.824 "name": "BaseBdev4", 00:08:30.824 "uuid": "b035fd71-3232-44d5-ae12-20961c6e790a", 00:08:30.824 "is_configured": true, 00:08:30.824 "data_offset": 2048, 00:08:30.824 "data_size": 63488 00:08:30.824 } 00:08:30.824 ] 00:08:30.824 }' 00:08:30.824 19:49:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:30.824 19:49:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:31.084 19:49:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:08:31.084 19:49:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.084 19:49:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:31.084 [2024-11-26 19:49:22.000933] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:31.084 19:49:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.084 19:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:08:31.084 19:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:31.084 19:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:31.084 19:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:31.084 19:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:31.084 19:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:08:31.084 19:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:31.084 19:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:31.084 19:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:31.084 19:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:31.084 19:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:31.084 19:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:31.084 19:49:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.084 19:49:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:31.344 19:49:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.344 19:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:31.344 "name": "Existed_Raid", 00:08:31.344 "uuid": "b960dc41-85f5-4303-a34e-633c93465b3d", 00:08:31.344 "strip_size_kb": 64, 00:08:31.344 "state": "configuring", 00:08:31.344 "raid_level": "concat", 00:08:31.344 "superblock": true, 00:08:31.344 "num_base_bdevs": 4, 00:08:31.344 "num_base_bdevs_discovered": 2, 00:08:31.344 "num_base_bdevs_operational": 4, 00:08:31.344 "base_bdevs_list": [ 00:08:31.344 { 00:08:31.344 "name": "BaseBdev1", 00:08:31.344 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:31.344 "is_configured": false, 00:08:31.344 "data_offset": 0, 00:08:31.344 "data_size": 0 00:08:31.344 }, 00:08:31.344 { 00:08:31.344 "name": null, 00:08:31.344 "uuid": "0ccb4f58-4ae2-4c10-b1a8-9dbea69f2a93", 00:08:31.344 "is_configured": false, 00:08:31.344 "data_offset": 0, 00:08:31.344 "data_size": 63488 
00:08:31.344 }, 00:08:31.344 { 00:08:31.344 "name": "BaseBdev3", 00:08:31.344 "uuid": "34075943-cb70-41b3-b689-278605b8f1e6", 00:08:31.344 "is_configured": true, 00:08:31.344 "data_offset": 2048, 00:08:31.344 "data_size": 63488 00:08:31.344 }, 00:08:31.344 { 00:08:31.344 "name": "BaseBdev4", 00:08:31.344 "uuid": "b035fd71-3232-44d5-ae12-20961c6e790a", 00:08:31.344 "is_configured": true, 00:08:31.344 "data_offset": 2048, 00:08:31.344 "data_size": 63488 00:08:31.344 } 00:08:31.344 ] 00:08:31.344 }' 00:08:31.344 19:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:31.344 19:49:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:31.605 19:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:31.605 19:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:31.605 19:49:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.605 19:49:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:31.605 19:49:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.605 19:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:08:31.605 19:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:31.605 19:49:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.605 19:49:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:31.605 [2024-11-26 19:49:22.401717] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:31.605 BaseBdev1 00:08:31.605 19:49:22 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.605 19:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:08:31.605 19:49:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:31.605 19:49:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:31.605 19:49:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:31.605 19:49:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:31.605 19:49:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:31.605 19:49:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:31.605 19:49:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.605 19:49:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:31.605 19:49:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.605 19:49:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:31.605 19:49:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.605 19:49:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:31.605 [ 00:08:31.605 { 00:08:31.605 "name": "BaseBdev1", 00:08:31.605 "aliases": [ 00:08:31.605 "8b5971b4-aaa2-46eb-b0d8-73ce5fbdef0a" 00:08:31.605 ], 00:08:31.605 "product_name": "Malloc disk", 00:08:31.605 "block_size": 512, 00:08:31.605 "num_blocks": 65536, 00:08:31.605 "uuid": "8b5971b4-aaa2-46eb-b0d8-73ce5fbdef0a", 00:08:31.605 "assigned_rate_limits": { 00:08:31.605 "rw_ios_per_sec": 0, 00:08:31.605 "rw_mbytes_per_sec": 0, 
00:08:31.605 "r_mbytes_per_sec": 0, 00:08:31.605 "w_mbytes_per_sec": 0 00:08:31.605 }, 00:08:31.605 "claimed": true, 00:08:31.605 "claim_type": "exclusive_write", 00:08:31.605 "zoned": false, 00:08:31.605 "supported_io_types": { 00:08:31.605 "read": true, 00:08:31.605 "write": true, 00:08:31.605 "unmap": true, 00:08:31.605 "flush": true, 00:08:31.605 "reset": true, 00:08:31.605 "nvme_admin": false, 00:08:31.605 "nvme_io": false, 00:08:31.605 "nvme_io_md": false, 00:08:31.605 "write_zeroes": true, 00:08:31.605 "zcopy": true, 00:08:31.605 "get_zone_info": false, 00:08:31.605 "zone_management": false, 00:08:31.605 "zone_append": false, 00:08:31.605 "compare": false, 00:08:31.605 "compare_and_write": false, 00:08:31.605 "abort": true, 00:08:31.605 "seek_hole": false, 00:08:31.605 "seek_data": false, 00:08:31.605 "copy": true, 00:08:31.605 "nvme_iov_md": false 00:08:31.605 }, 00:08:31.605 "memory_domains": [ 00:08:31.605 { 00:08:31.605 "dma_device_id": "system", 00:08:31.605 "dma_device_type": 1 00:08:31.605 }, 00:08:31.605 { 00:08:31.605 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:31.605 "dma_device_type": 2 00:08:31.605 } 00:08:31.605 ], 00:08:31.605 "driver_specific": {} 00:08:31.605 } 00:08:31.605 ] 00:08:31.605 19:49:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.605 19:49:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:31.605 19:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:08:31.605 19:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:31.605 19:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:31.605 19:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:31.605 19:49:22 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:31.605 19:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:08:31.605 19:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:31.605 19:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:31.605 19:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:31.605 19:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:31.605 19:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:31.605 19:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:31.605 19:49:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.606 19:49:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:31.606 19:49:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.606 19:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:31.606 "name": "Existed_Raid", 00:08:31.606 "uuid": "b960dc41-85f5-4303-a34e-633c93465b3d", 00:08:31.606 "strip_size_kb": 64, 00:08:31.606 "state": "configuring", 00:08:31.606 "raid_level": "concat", 00:08:31.606 "superblock": true, 00:08:31.606 "num_base_bdevs": 4, 00:08:31.606 "num_base_bdevs_discovered": 3, 00:08:31.606 "num_base_bdevs_operational": 4, 00:08:31.606 "base_bdevs_list": [ 00:08:31.606 { 00:08:31.606 "name": "BaseBdev1", 00:08:31.606 "uuid": "8b5971b4-aaa2-46eb-b0d8-73ce5fbdef0a", 00:08:31.606 "is_configured": true, 00:08:31.606 "data_offset": 2048, 00:08:31.606 "data_size": 63488 00:08:31.606 }, 00:08:31.606 { 
00:08:31.606 "name": null, 00:08:31.606 "uuid": "0ccb4f58-4ae2-4c10-b1a8-9dbea69f2a93", 00:08:31.606 "is_configured": false, 00:08:31.606 "data_offset": 0, 00:08:31.606 "data_size": 63488 00:08:31.606 }, 00:08:31.606 { 00:08:31.606 "name": "BaseBdev3", 00:08:31.606 "uuid": "34075943-cb70-41b3-b689-278605b8f1e6", 00:08:31.606 "is_configured": true, 00:08:31.606 "data_offset": 2048, 00:08:31.606 "data_size": 63488 00:08:31.606 }, 00:08:31.606 { 00:08:31.606 "name": "BaseBdev4", 00:08:31.606 "uuid": "b035fd71-3232-44d5-ae12-20961c6e790a", 00:08:31.606 "is_configured": true, 00:08:31.606 "data_offset": 2048, 00:08:31.606 "data_size": 63488 00:08:31.606 } 00:08:31.606 ] 00:08:31.606 }' 00:08:31.606 19:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:31.606 19:49:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:31.897 19:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:31.897 19:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:31.897 19:49:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.897 19:49:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:31.897 19:49:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.897 19:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:08:31.897 19:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:08:31.897 19:49:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.897 19:49:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:31.897 [2024-11-26 19:49:22.777907] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:31.897 19:49:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.897 19:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:08:31.897 19:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:31.897 19:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:31.897 19:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:31.897 19:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:31.897 19:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:08:31.897 19:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:31.897 19:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:31.897 19:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:31.897 19:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:31.897 19:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:31.897 19:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:31.897 19:49:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.897 19:49:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:31.897 19:49:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.897 19:49:22 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:31.897 "name": "Existed_Raid", 00:08:31.897 "uuid": "b960dc41-85f5-4303-a34e-633c93465b3d", 00:08:31.897 "strip_size_kb": 64, 00:08:31.897 "state": "configuring", 00:08:31.897 "raid_level": "concat", 00:08:31.897 "superblock": true, 00:08:31.897 "num_base_bdevs": 4, 00:08:31.897 "num_base_bdevs_discovered": 2, 00:08:31.897 "num_base_bdevs_operational": 4, 00:08:31.897 "base_bdevs_list": [ 00:08:31.897 { 00:08:31.897 "name": "BaseBdev1", 00:08:31.897 "uuid": "8b5971b4-aaa2-46eb-b0d8-73ce5fbdef0a", 00:08:31.897 "is_configured": true, 00:08:31.897 "data_offset": 2048, 00:08:31.897 "data_size": 63488 00:08:31.897 }, 00:08:31.897 { 00:08:31.897 "name": null, 00:08:31.897 "uuid": "0ccb4f58-4ae2-4c10-b1a8-9dbea69f2a93", 00:08:31.897 "is_configured": false, 00:08:31.897 "data_offset": 0, 00:08:31.897 "data_size": 63488 00:08:31.897 }, 00:08:31.897 { 00:08:31.897 "name": null, 00:08:31.897 "uuid": "34075943-cb70-41b3-b689-278605b8f1e6", 00:08:31.897 "is_configured": false, 00:08:31.897 "data_offset": 0, 00:08:31.897 "data_size": 63488 00:08:31.897 }, 00:08:31.897 { 00:08:31.897 "name": "BaseBdev4", 00:08:31.897 "uuid": "b035fd71-3232-44d5-ae12-20961c6e790a", 00:08:31.897 "is_configured": true, 00:08:31.897 "data_offset": 2048, 00:08:31.897 "data_size": 63488 00:08:31.897 } 00:08:31.897 ] 00:08:31.897 }' 00:08:31.897 19:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:31.897 19:49:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:32.469 19:49:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:32.469 19:49:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:32.469 19:49:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.469 
19:49:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:32.469 19:49:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.469 19:49:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:08:32.469 19:49:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:08:32.469 19:49:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.469 19:49:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:32.469 [2024-11-26 19:49:23.153975] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:32.469 19:49:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.469 19:49:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:08:32.469 19:49:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:32.469 19:49:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:32.469 19:49:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:32.469 19:49:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:32.469 19:49:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:08:32.469 19:49:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:32.469 19:49:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:32.469 19:49:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:08:32.469 19:49:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:32.470 19:49:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:32.470 19:49:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:32.470 19:49:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.470 19:49:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:32.470 19:49:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.470 19:49:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:32.470 "name": "Existed_Raid", 00:08:32.470 "uuid": "b960dc41-85f5-4303-a34e-633c93465b3d", 00:08:32.470 "strip_size_kb": 64, 00:08:32.470 "state": "configuring", 00:08:32.470 "raid_level": "concat", 00:08:32.470 "superblock": true, 00:08:32.470 "num_base_bdevs": 4, 00:08:32.470 "num_base_bdevs_discovered": 3, 00:08:32.470 "num_base_bdevs_operational": 4, 00:08:32.470 "base_bdevs_list": [ 00:08:32.470 { 00:08:32.470 "name": "BaseBdev1", 00:08:32.470 "uuid": "8b5971b4-aaa2-46eb-b0d8-73ce5fbdef0a", 00:08:32.470 "is_configured": true, 00:08:32.470 "data_offset": 2048, 00:08:32.470 "data_size": 63488 00:08:32.470 }, 00:08:32.470 { 00:08:32.470 "name": null, 00:08:32.470 "uuid": "0ccb4f58-4ae2-4c10-b1a8-9dbea69f2a93", 00:08:32.470 "is_configured": false, 00:08:32.470 "data_offset": 0, 00:08:32.470 "data_size": 63488 00:08:32.470 }, 00:08:32.470 { 00:08:32.470 "name": "BaseBdev3", 00:08:32.470 "uuid": "34075943-cb70-41b3-b689-278605b8f1e6", 00:08:32.470 "is_configured": true, 00:08:32.470 "data_offset": 2048, 00:08:32.470 "data_size": 63488 00:08:32.470 }, 00:08:32.470 { 00:08:32.470 "name": "BaseBdev4", 00:08:32.470 "uuid": 
"b035fd71-3232-44d5-ae12-20961c6e790a", 00:08:32.470 "is_configured": true, 00:08:32.470 "data_offset": 2048, 00:08:32.470 "data_size": 63488 00:08:32.470 } 00:08:32.470 ] 00:08:32.470 }' 00:08:32.470 19:49:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:32.470 19:49:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:32.731 19:49:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:32.731 19:49:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:32.731 19:49:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.731 19:49:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:32.731 19:49:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.731 19:49:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:08:32.731 19:49:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:32.731 19:49:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.731 19:49:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:32.731 [2024-11-26 19:49:23.490072] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:32.731 19:49:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.731 19:49:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:08:32.731 19:49:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:32.731 19:49:23 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:32.731 19:49:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:32.731 19:49:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:32.731 19:49:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:08:32.731 19:49:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:32.731 19:49:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:32.731 19:49:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:32.731 19:49:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:32.731 19:49:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:32.731 19:49:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:32.731 19:49:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.731 19:49:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:32.731 19:49:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.731 19:49:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:32.731 "name": "Existed_Raid", 00:08:32.731 "uuid": "b960dc41-85f5-4303-a34e-633c93465b3d", 00:08:32.731 "strip_size_kb": 64, 00:08:32.731 "state": "configuring", 00:08:32.731 "raid_level": "concat", 00:08:32.731 "superblock": true, 00:08:32.731 "num_base_bdevs": 4, 00:08:32.731 "num_base_bdevs_discovered": 2, 00:08:32.731 "num_base_bdevs_operational": 4, 00:08:32.731 "base_bdevs_list": [ 00:08:32.731 { 00:08:32.731 "name": null, 00:08:32.731 
"uuid": "8b5971b4-aaa2-46eb-b0d8-73ce5fbdef0a", 00:08:32.731 "is_configured": false, 00:08:32.731 "data_offset": 0, 00:08:32.731 "data_size": 63488 00:08:32.731 }, 00:08:32.731 { 00:08:32.731 "name": null, 00:08:32.731 "uuid": "0ccb4f58-4ae2-4c10-b1a8-9dbea69f2a93", 00:08:32.731 "is_configured": false, 00:08:32.731 "data_offset": 0, 00:08:32.731 "data_size": 63488 00:08:32.731 }, 00:08:32.731 { 00:08:32.731 "name": "BaseBdev3", 00:08:32.731 "uuid": "34075943-cb70-41b3-b689-278605b8f1e6", 00:08:32.731 "is_configured": true, 00:08:32.731 "data_offset": 2048, 00:08:32.731 "data_size": 63488 00:08:32.731 }, 00:08:32.731 { 00:08:32.731 "name": "BaseBdev4", 00:08:32.731 "uuid": "b035fd71-3232-44d5-ae12-20961c6e790a", 00:08:32.731 "is_configured": true, 00:08:32.731 "data_offset": 2048, 00:08:32.731 "data_size": 63488 00:08:32.731 } 00:08:32.731 ] 00:08:32.731 }' 00:08:32.732 19:49:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:32.732 19:49:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:32.993 19:49:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:32.993 19:49:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.993 19:49:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:32.993 19:49:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:32.993 19:49:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.993 19:49:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:08:32.993 19:49:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:08:32.993 19:49:23 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.993 19:49:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:32.993 [2024-11-26 19:49:23.907401] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:32.993 19:49:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.993 19:49:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:08:32.993 19:49:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:32.993 19:49:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:32.993 19:49:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:32.993 19:49:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:32.993 19:49:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:08:32.993 19:49:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:32.993 19:49:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:32.993 19:49:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:32.993 19:49:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:32.993 19:49:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:32.993 19:49:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:32.993 19:49:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.993 19:49:23 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:33.253 19:49:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.253 19:49:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:33.253 "name": "Existed_Raid", 00:08:33.253 "uuid": "b960dc41-85f5-4303-a34e-633c93465b3d", 00:08:33.253 "strip_size_kb": 64, 00:08:33.253 "state": "configuring", 00:08:33.253 "raid_level": "concat", 00:08:33.253 "superblock": true, 00:08:33.253 "num_base_bdevs": 4, 00:08:33.253 "num_base_bdevs_discovered": 3, 00:08:33.253 "num_base_bdevs_operational": 4, 00:08:33.253 "base_bdevs_list": [ 00:08:33.253 { 00:08:33.253 "name": null, 00:08:33.253 "uuid": "8b5971b4-aaa2-46eb-b0d8-73ce5fbdef0a", 00:08:33.253 "is_configured": false, 00:08:33.253 "data_offset": 0, 00:08:33.253 "data_size": 63488 00:08:33.253 }, 00:08:33.253 { 00:08:33.253 "name": "BaseBdev2", 00:08:33.253 "uuid": "0ccb4f58-4ae2-4c10-b1a8-9dbea69f2a93", 00:08:33.253 "is_configured": true, 00:08:33.253 "data_offset": 2048, 00:08:33.253 "data_size": 63488 00:08:33.253 }, 00:08:33.253 { 00:08:33.253 "name": "BaseBdev3", 00:08:33.253 "uuid": "34075943-cb70-41b3-b689-278605b8f1e6", 00:08:33.253 "is_configured": true, 00:08:33.253 "data_offset": 2048, 00:08:33.253 "data_size": 63488 00:08:33.253 }, 00:08:33.253 { 00:08:33.253 "name": "BaseBdev4", 00:08:33.253 "uuid": "b035fd71-3232-44d5-ae12-20961c6e790a", 00:08:33.253 "is_configured": true, 00:08:33.253 "data_offset": 2048, 00:08:33.253 "data_size": 63488 00:08:33.253 } 00:08:33.253 ] 00:08:33.253 }' 00:08:33.253 19:49:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:33.253 19:49:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:33.515 19:49:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:33.515 19:49:24 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:33.515 19:49:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.515 19:49:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:33.515 19:49:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.515 19:49:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:08:33.515 19:49:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:33.515 19:49:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:08:33.515 19:49:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.515 19:49:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:33.515 19:49:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.515 19:49:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 8b5971b4-aaa2-46eb-b0d8-73ce5fbdef0a 00:08:33.515 19:49:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.515 19:49:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:33.515 [2024-11-26 19:49:24.319966] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:08:33.515 [2024-11-26 19:49:24.320168] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:08:33.515 [2024-11-26 19:49:24.320180] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:08:33.515 [2024-11-26 19:49:24.320460] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:08:33.515 [2024-11-26 19:49:24.320594] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:08:33.515 [2024-11-26 19:49:24.320628] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:08:33.515 [2024-11-26 19:49:24.320751] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:33.515 NewBaseBdev 00:08:33.515 19:49:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.515 19:49:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:08:33.515 19:49:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:08:33.515 19:49:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:33.515 19:49:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:33.515 19:49:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:33.515 19:49:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:33.515 19:49:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:33.515 19:49:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.515 19:49:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:33.515 19:49:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.515 19:49:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:08:33.515 19:49:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.515 19:49:24 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:33.515 [ 00:08:33.515 { 00:08:33.515 "name": "NewBaseBdev", 00:08:33.515 "aliases": [ 00:08:33.515 "8b5971b4-aaa2-46eb-b0d8-73ce5fbdef0a" 00:08:33.515 ], 00:08:33.515 "product_name": "Malloc disk", 00:08:33.515 "block_size": 512, 00:08:33.515 "num_blocks": 65536, 00:08:33.515 "uuid": "8b5971b4-aaa2-46eb-b0d8-73ce5fbdef0a", 00:08:33.515 "assigned_rate_limits": { 00:08:33.515 "rw_ios_per_sec": 0, 00:08:33.515 "rw_mbytes_per_sec": 0, 00:08:33.515 "r_mbytes_per_sec": 0, 00:08:33.515 "w_mbytes_per_sec": 0 00:08:33.515 }, 00:08:33.515 "claimed": true, 00:08:33.515 "claim_type": "exclusive_write", 00:08:33.515 "zoned": false, 00:08:33.515 "supported_io_types": { 00:08:33.515 "read": true, 00:08:33.515 "write": true, 00:08:33.515 "unmap": true, 00:08:33.515 "flush": true, 00:08:33.515 "reset": true, 00:08:33.515 "nvme_admin": false, 00:08:33.515 "nvme_io": false, 00:08:33.515 "nvme_io_md": false, 00:08:33.515 "write_zeroes": true, 00:08:33.515 "zcopy": true, 00:08:33.515 "get_zone_info": false, 00:08:33.515 "zone_management": false, 00:08:33.515 "zone_append": false, 00:08:33.515 "compare": false, 00:08:33.515 "compare_and_write": false, 00:08:33.515 "abort": true, 00:08:33.515 "seek_hole": false, 00:08:33.515 "seek_data": false, 00:08:33.515 "copy": true, 00:08:33.515 "nvme_iov_md": false 00:08:33.515 }, 00:08:33.515 "memory_domains": [ 00:08:33.515 { 00:08:33.515 "dma_device_id": "system", 00:08:33.515 "dma_device_type": 1 00:08:33.515 }, 00:08:33.515 { 00:08:33.515 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:33.515 "dma_device_type": 2 00:08:33.515 } 00:08:33.515 ], 00:08:33.515 "driver_specific": {} 00:08:33.515 } 00:08:33.515 ] 00:08:33.515 19:49:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.515 19:49:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:33.515 19:49:24 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:08:33.515 19:49:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:33.515 19:49:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:33.515 19:49:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:33.515 19:49:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:33.515 19:49:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:08:33.515 19:49:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:33.515 19:49:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:33.515 19:49:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:33.515 19:49:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:33.515 19:49:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:33.515 19:49:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:33.515 19:49:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.515 19:49:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:33.515 19:49:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.515 19:49:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:33.515 "name": "Existed_Raid", 00:08:33.515 "uuid": "b960dc41-85f5-4303-a34e-633c93465b3d", 00:08:33.515 "strip_size_kb": 64, 00:08:33.515 
"state": "online", 00:08:33.515 "raid_level": "concat", 00:08:33.515 "superblock": true, 00:08:33.515 "num_base_bdevs": 4, 00:08:33.515 "num_base_bdevs_discovered": 4, 00:08:33.515 "num_base_bdevs_operational": 4, 00:08:33.515 "base_bdevs_list": [ 00:08:33.515 { 00:08:33.515 "name": "NewBaseBdev", 00:08:33.515 "uuid": "8b5971b4-aaa2-46eb-b0d8-73ce5fbdef0a", 00:08:33.515 "is_configured": true, 00:08:33.515 "data_offset": 2048, 00:08:33.515 "data_size": 63488 00:08:33.515 }, 00:08:33.515 { 00:08:33.515 "name": "BaseBdev2", 00:08:33.515 "uuid": "0ccb4f58-4ae2-4c10-b1a8-9dbea69f2a93", 00:08:33.515 "is_configured": true, 00:08:33.515 "data_offset": 2048, 00:08:33.515 "data_size": 63488 00:08:33.516 }, 00:08:33.516 { 00:08:33.516 "name": "BaseBdev3", 00:08:33.516 "uuid": "34075943-cb70-41b3-b689-278605b8f1e6", 00:08:33.516 "is_configured": true, 00:08:33.516 "data_offset": 2048, 00:08:33.516 "data_size": 63488 00:08:33.516 }, 00:08:33.516 { 00:08:33.516 "name": "BaseBdev4", 00:08:33.516 "uuid": "b035fd71-3232-44d5-ae12-20961c6e790a", 00:08:33.516 "is_configured": true, 00:08:33.516 "data_offset": 2048, 00:08:33.516 "data_size": 63488 00:08:33.516 } 00:08:33.516 ] 00:08:33.516 }' 00:08:33.516 19:49:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:33.516 19:49:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:33.776 19:49:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:08:33.776 19:49:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:33.776 19:49:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:33.776 19:49:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:33.776 19:49:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:33.776 
19:49:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:33.776 19:49:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:33.776 19:49:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:33.776 19:49:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.776 19:49:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:33.776 [2024-11-26 19:49:24.668500] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:33.776 19:49:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.776 19:49:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:33.776 "name": "Existed_Raid", 00:08:33.776 "aliases": [ 00:08:33.776 "b960dc41-85f5-4303-a34e-633c93465b3d" 00:08:33.776 ], 00:08:33.776 "product_name": "Raid Volume", 00:08:33.776 "block_size": 512, 00:08:33.776 "num_blocks": 253952, 00:08:33.776 "uuid": "b960dc41-85f5-4303-a34e-633c93465b3d", 00:08:33.776 "assigned_rate_limits": { 00:08:33.776 "rw_ios_per_sec": 0, 00:08:33.776 "rw_mbytes_per_sec": 0, 00:08:33.776 "r_mbytes_per_sec": 0, 00:08:33.776 "w_mbytes_per_sec": 0 00:08:33.776 }, 00:08:33.776 "claimed": false, 00:08:33.776 "zoned": false, 00:08:33.776 "supported_io_types": { 00:08:33.776 "read": true, 00:08:33.776 "write": true, 00:08:33.776 "unmap": true, 00:08:33.776 "flush": true, 00:08:33.776 "reset": true, 00:08:33.776 "nvme_admin": false, 00:08:33.776 "nvme_io": false, 00:08:33.776 "nvme_io_md": false, 00:08:33.776 "write_zeroes": true, 00:08:33.776 "zcopy": false, 00:08:33.776 "get_zone_info": false, 00:08:33.776 "zone_management": false, 00:08:33.776 "zone_append": false, 00:08:33.776 "compare": false, 00:08:33.776 "compare_and_write": false, 00:08:33.776 "abort": 
false, 00:08:33.776 "seek_hole": false, 00:08:33.776 "seek_data": false, 00:08:33.776 "copy": false, 00:08:33.776 "nvme_iov_md": false 00:08:33.776 }, 00:08:33.776 "memory_domains": [ 00:08:33.776 { 00:08:33.776 "dma_device_id": "system", 00:08:33.776 "dma_device_type": 1 00:08:33.776 }, 00:08:33.776 { 00:08:33.776 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:33.776 "dma_device_type": 2 00:08:33.776 }, 00:08:33.776 { 00:08:33.776 "dma_device_id": "system", 00:08:33.776 "dma_device_type": 1 00:08:33.776 }, 00:08:33.776 { 00:08:33.776 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:33.776 "dma_device_type": 2 00:08:33.776 }, 00:08:33.776 { 00:08:33.776 "dma_device_id": "system", 00:08:33.776 "dma_device_type": 1 00:08:33.776 }, 00:08:33.776 { 00:08:33.776 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:33.776 "dma_device_type": 2 00:08:33.776 }, 00:08:33.776 { 00:08:33.776 "dma_device_id": "system", 00:08:33.776 "dma_device_type": 1 00:08:33.776 }, 00:08:33.776 { 00:08:33.776 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:33.776 "dma_device_type": 2 00:08:33.776 } 00:08:33.776 ], 00:08:33.776 "driver_specific": { 00:08:33.776 "raid": { 00:08:33.776 "uuid": "b960dc41-85f5-4303-a34e-633c93465b3d", 00:08:33.776 "strip_size_kb": 64, 00:08:33.776 "state": "online", 00:08:33.776 "raid_level": "concat", 00:08:33.776 "superblock": true, 00:08:33.776 "num_base_bdevs": 4, 00:08:33.776 "num_base_bdevs_discovered": 4, 00:08:33.776 "num_base_bdevs_operational": 4, 00:08:33.776 "base_bdevs_list": [ 00:08:33.776 { 00:08:33.776 "name": "NewBaseBdev", 00:08:33.776 "uuid": "8b5971b4-aaa2-46eb-b0d8-73ce5fbdef0a", 00:08:33.776 "is_configured": true, 00:08:33.776 "data_offset": 2048, 00:08:33.776 "data_size": 63488 00:08:33.776 }, 00:08:33.776 { 00:08:33.776 "name": "BaseBdev2", 00:08:33.776 "uuid": "0ccb4f58-4ae2-4c10-b1a8-9dbea69f2a93", 00:08:33.776 "is_configured": true, 00:08:33.776 "data_offset": 2048, 00:08:33.776 "data_size": 63488 00:08:33.776 }, 00:08:33.776 { 00:08:33.776 
"name": "BaseBdev3", 00:08:33.776 "uuid": "34075943-cb70-41b3-b689-278605b8f1e6", 00:08:33.776 "is_configured": true, 00:08:33.776 "data_offset": 2048, 00:08:33.776 "data_size": 63488 00:08:33.776 }, 00:08:33.776 { 00:08:33.776 "name": "BaseBdev4", 00:08:33.776 "uuid": "b035fd71-3232-44d5-ae12-20961c6e790a", 00:08:33.776 "is_configured": true, 00:08:33.776 "data_offset": 2048, 00:08:33.776 "data_size": 63488 00:08:33.776 } 00:08:33.776 ] 00:08:33.776 } 00:08:33.776 } 00:08:33.776 }' 00:08:33.776 19:49:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:34.037 19:49:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:08:34.037 BaseBdev2 00:08:34.037 BaseBdev3 00:08:34.037 BaseBdev4' 00:08:34.037 19:49:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:34.037 19:49:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:34.037 19:49:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:34.037 19:49:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:34.037 19:49:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:08:34.037 19:49:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.037 19:49:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:34.037 19:49:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.037 19:49:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:34.037 19:49:24 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:34.037 19:49:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:34.037 19:49:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:34.037 19:49:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.037 19:49:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:34.037 19:49:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:34.037 19:49:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.037 19:49:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:34.037 19:49:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:34.037 19:49:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:34.037 19:49:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:34.037 19:49:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:34.037 19:49:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.037 19:49:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:34.037 19:49:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.037 19:49:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:34.037 19:49:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:08:34.037 19:49:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:34.037 19:49:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:34.037 19:49:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:08:34.037 19:49:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.037 19:49:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:34.037 19:49:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.037 19:49:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:34.037 19:49:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:34.037 19:49:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:34.037 19:49:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.037 19:49:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:34.037 [2024-11-26 19:49:24.888122] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:34.037 [2024-11-26 19:49:24.888150] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:34.037 [2024-11-26 19:49:24.888226] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:34.037 [2024-11-26 19:49:24.888303] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:34.038 [2024-11-26 19:49:24.888313] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:08:34.038 19:49:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.038 19:49:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 70109 00:08:34.038 19:49:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 70109 ']' 00:08:34.038 19:49:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 70109 00:08:34.038 19:49:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:08:34.038 19:49:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:34.038 19:49:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70109 00:08:34.038 killing process with pid 70109 00:08:34.038 19:49:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:34.038 19:49:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:34.038 19:49:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70109' 00:08:34.038 19:49:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 70109 00:08:34.038 [2024-11-26 19:49:24.918227] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:34.038 19:49:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 70109 00:08:34.298 [2024-11-26 19:49:25.174287] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:35.239 19:49:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:08:35.239 00:08:35.239 real 0m8.537s 00:08:35.239 user 0m13.438s 00:08:35.239 sys 0m1.487s 00:08:35.239 19:49:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:35.239 19:49:25 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.239 ************************************ 00:08:35.239 END TEST raid_state_function_test_sb 00:08:35.239 ************************************ 00:08:35.239 19:49:25 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:08:35.239 19:49:25 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:35.239 19:49:25 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:35.239 19:49:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:35.239 ************************************ 00:08:35.239 START TEST raid_superblock_test 00:08:35.239 ************************************ 00:08:35.239 19:49:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 4 00:08:35.239 19:49:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:08:35.239 19:49:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:08:35.239 19:49:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:08:35.239 19:49:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:08:35.239 19:49:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:08:35.239 19:49:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:08:35.239 19:49:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:08:35.239 19:49:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:08:35.239 19:49:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:08:35.239 19:49:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:08:35.239 19:49:25 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:08:35.239 19:49:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:08:35.239 19:49:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:08:35.239 19:49:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:08:35.240 19:49:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:08:35.240 19:49:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:08:35.240 19:49:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=70751 00:08:35.240 19:49:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 70751 00:08:35.240 19:49:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 70751 ']' 00:08:35.240 19:49:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:08:35.240 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:35.240 19:49:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:35.240 19:49:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:35.240 19:49:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:35.240 19:49:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:35.240 19:49:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.240 [2024-11-26 19:49:26.054722] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 
00:08:35.240 [2024-11-26 19:49:26.054843] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70751 ] 00:08:35.501 [2024-11-26 19:49:26.215902] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:35.501 [2024-11-26 19:49:26.327145] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:35.790 [2024-11-26 19:49:26.475293] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:35.790 [2024-11-26 19:49:26.475335] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:36.051 19:49:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:36.051 19:49:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:08:36.051 19:49:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:08:36.051 19:49:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:36.051 19:49:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:08:36.051 19:49:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:08:36.051 19:49:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:08:36.051 19:49:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:36.051 19:49:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:36.051 19:49:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:36.051 19:49:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:08:36.051 
19:49:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.051 19:49:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.051 malloc1 00:08:36.051 19:49:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.051 19:49:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:36.051 19:49:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.051 19:49:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.051 [2024-11-26 19:49:26.941060] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:36.051 [2024-11-26 19:49:26.941259] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:36.051 [2024-11-26 19:49:26.941309] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:36.051 [2024-11-26 19:49:26.941380] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:36.051 [2024-11-26 19:49:26.943794] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:36.051 [2024-11-26 19:49:26.943917] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:36.051 pt1 00:08:36.051 19:49:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.051 19:49:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:36.051 19:49:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:36.051 19:49:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:08:36.051 19:49:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:08:36.051 19:49:26 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:08:36.051 19:49:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:36.051 19:49:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:36.051 19:49:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:36.051 19:49:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:08:36.051 19:49:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.051 19:49:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.051 malloc2 00:08:36.051 19:49:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.051 19:49:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:36.051 19:49:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.051 19:49:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.312 [2024-11-26 19:49:26.987892] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:36.312 [2024-11-26 19:49:26.987965] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:36.312 [2024-11-26 19:49:26.987995] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:08:36.312 [2024-11-26 19:49:26.988004] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:36.312 [2024-11-26 19:49:26.990404] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:36.312 [2024-11-26 19:49:26.990439] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:36.312 
pt2 00:08:36.312 19:49:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.312 19:49:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:36.312 19:49:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:36.312 19:49:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:08:36.312 19:49:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:08:36.312 19:49:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:08:36.312 19:49:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:36.312 19:49:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:36.312 19:49:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:36.312 19:49:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:08:36.312 19:49:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.313 19:49:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.313 malloc3 00:08:36.313 19:49:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.313 19:49:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:08:36.313 19:49:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.313 19:49:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.313 [2024-11-26 19:49:27.050489] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:08:36.313 [2024-11-26 19:49:27.050564] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:36.313 [2024-11-26 19:49:27.050589] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:08:36.313 [2024-11-26 19:49:27.050598] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:36.313 [2024-11-26 19:49:27.053041] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:36.313 [2024-11-26 19:49:27.053083] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:08:36.313 pt3 00:08:36.313 19:49:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.313 19:49:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:36.313 19:49:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:36.313 19:49:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:08:36.313 19:49:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:08:36.313 19:49:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:08:36.313 19:49:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:36.313 19:49:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:36.313 19:49:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:36.313 19:49:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:08:36.313 19:49:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.313 19:49:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.313 malloc4 00:08:36.313 19:49:27 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.313 19:49:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:08:36.313 19:49:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.313 19:49:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.313 [2024-11-26 19:49:27.094422] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:08:36.313 [2024-11-26 19:49:27.094482] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:36.313 [2024-11-26 19:49:27.094501] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:08:36.313 [2024-11-26 19:49:27.094512] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:36.313 [2024-11-26 19:49:27.096898] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:36.313 [2024-11-26 19:49:27.096936] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:08:36.313 pt4 00:08:36.313 19:49:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.313 19:49:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:36.313 19:49:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:36.313 19:49:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:08:36.313 19:49:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.313 19:49:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.313 [2024-11-26 19:49:27.102456] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:36.313 [2024-11-26 
19:49:27.104632] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:36.313 [2024-11-26 19:49:27.104725] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:08:36.313 [2024-11-26 19:49:27.104775] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:08:36.313 [2024-11-26 19:49:27.104974] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:08:36.313 [2024-11-26 19:49:27.104985] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:08:36.313 [2024-11-26 19:49:27.105262] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:08:36.313 [2024-11-26 19:49:27.105447] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:36.313 [2024-11-26 19:49:27.105460] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:08:36.313 [2024-11-26 19:49:27.105603] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:36.313 19:49:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.313 19:49:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:08:36.313 19:49:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:36.313 19:49:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:36.313 19:49:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:36.313 19:49:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:36.313 19:49:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:08:36.313 19:49:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:08:36.313 19:49:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:36.313 19:49:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:36.313 19:49:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:36.313 19:49:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:36.313 19:49:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:36.313 19:49:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.313 19:49:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.313 19:49:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.313 19:49:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:36.313 "name": "raid_bdev1", 00:08:36.313 "uuid": "7bcbfa6b-816f-487d-b59f-b7ef812adc3d", 00:08:36.313 "strip_size_kb": 64, 00:08:36.313 "state": "online", 00:08:36.313 "raid_level": "concat", 00:08:36.313 "superblock": true, 00:08:36.313 "num_base_bdevs": 4, 00:08:36.313 "num_base_bdevs_discovered": 4, 00:08:36.313 "num_base_bdevs_operational": 4, 00:08:36.313 "base_bdevs_list": [ 00:08:36.313 { 00:08:36.313 "name": "pt1", 00:08:36.313 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:36.313 "is_configured": true, 00:08:36.313 "data_offset": 2048, 00:08:36.313 "data_size": 63488 00:08:36.313 }, 00:08:36.313 { 00:08:36.313 "name": "pt2", 00:08:36.313 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:36.313 "is_configured": true, 00:08:36.313 "data_offset": 2048, 00:08:36.313 "data_size": 63488 00:08:36.313 }, 00:08:36.313 { 00:08:36.313 "name": "pt3", 00:08:36.313 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:36.313 "is_configured": true, 00:08:36.313 "data_offset": 2048, 00:08:36.313 
"data_size": 63488 00:08:36.313 }, 00:08:36.313 { 00:08:36.313 "name": "pt4", 00:08:36.313 "uuid": "00000000-0000-0000-0000-000000000004", 00:08:36.313 "is_configured": true, 00:08:36.313 "data_offset": 2048, 00:08:36.313 "data_size": 63488 00:08:36.313 } 00:08:36.313 ] 00:08:36.313 }' 00:08:36.313 19:49:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:36.313 19:49:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.575 19:49:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:08:36.575 19:49:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:36.575 19:49:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:36.575 19:49:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:36.575 19:49:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:36.575 19:49:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:36.575 19:49:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:36.575 19:49:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:36.575 19:49:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.575 19:49:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.575 [2024-11-26 19:49:27.426864] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:36.575 19:49:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.575 19:49:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:36.575 "name": "raid_bdev1", 00:08:36.575 "aliases": [ 00:08:36.575 "7bcbfa6b-816f-487d-b59f-b7ef812adc3d" 
00:08:36.575 ], 00:08:36.575 "product_name": "Raid Volume", 00:08:36.575 "block_size": 512, 00:08:36.575 "num_blocks": 253952, 00:08:36.575 "uuid": "7bcbfa6b-816f-487d-b59f-b7ef812adc3d", 00:08:36.575 "assigned_rate_limits": { 00:08:36.575 "rw_ios_per_sec": 0, 00:08:36.575 "rw_mbytes_per_sec": 0, 00:08:36.575 "r_mbytes_per_sec": 0, 00:08:36.575 "w_mbytes_per_sec": 0 00:08:36.575 }, 00:08:36.575 "claimed": false, 00:08:36.575 "zoned": false, 00:08:36.575 "supported_io_types": { 00:08:36.575 "read": true, 00:08:36.575 "write": true, 00:08:36.575 "unmap": true, 00:08:36.575 "flush": true, 00:08:36.575 "reset": true, 00:08:36.575 "nvme_admin": false, 00:08:36.575 "nvme_io": false, 00:08:36.575 "nvme_io_md": false, 00:08:36.575 "write_zeroes": true, 00:08:36.575 "zcopy": false, 00:08:36.575 "get_zone_info": false, 00:08:36.575 "zone_management": false, 00:08:36.575 "zone_append": false, 00:08:36.575 "compare": false, 00:08:36.575 "compare_and_write": false, 00:08:36.575 "abort": false, 00:08:36.575 "seek_hole": false, 00:08:36.575 "seek_data": false, 00:08:36.575 "copy": false, 00:08:36.575 "nvme_iov_md": false 00:08:36.575 }, 00:08:36.575 "memory_domains": [ 00:08:36.575 { 00:08:36.575 "dma_device_id": "system", 00:08:36.575 "dma_device_type": 1 00:08:36.575 }, 00:08:36.575 { 00:08:36.575 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:36.575 "dma_device_type": 2 00:08:36.575 }, 00:08:36.575 { 00:08:36.575 "dma_device_id": "system", 00:08:36.575 "dma_device_type": 1 00:08:36.575 }, 00:08:36.575 { 00:08:36.575 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:36.575 "dma_device_type": 2 00:08:36.575 }, 00:08:36.575 { 00:08:36.575 "dma_device_id": "system", 00:08:36.575 "dma_device_type": 1 00:08:36.575 }, 00:08:36.575 { 00:08:36.575 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:36.575 "dma_device_type": 2 00:08:36.575 }, 00:08:36.575 { 00:08:36.575 "dma_device_id": "system", 00:08:36.575 "dma_device_type": 1 00:08:36.575 }, 00:08:36.575 { 00:08:36.575 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:08:36.575 "dma_device_type": 2 00:08:36.575 } 00:08:36.575 ], 00:08:36.575 "driver_specific": { 00:08:36.575 "raid": { 00:08:36.575 "uuid": "7bcbfa6b-816f-487d-b59f-b7ef812adc3d", 00:08:36.575 "strip_size_kb": 64, 00:08:36.576 "state": "online", 00:08:36.576 "raid_level": "concat", 00:08:36.576 "superblock": true, 00:08:36.576 "num_base_bdevs": 4, 00:08:36.576 "num_base_bdevs_discovered": 4, 00:08:36.576 "num_base_bdevs_operational": 4, 00:08:36.576 "base_bdevs_list": [ 00:08:36.576 { 00:08:36.576 "name": "pt1", 00:08:36.576 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:36.576 "is_configured": true, 00:08:36.576 "data_offset": 2048, 00:08:36.576 "data_size": 63488 00:08:36.576 }, 00:08:36.576 { 00:08:36.576 "name": "pt2", 00:08:36.576 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:36.576 "is_configured": true, 00:08:36.576 "data_offset": 2048, 00:08:36.576 "data_size": 63488 00:08:36.576 }, 00:08:36.576 { 00:08:36.576 "name": "pt3", 00:08:36.576 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:36.576 "is_configured": true, 00:08:36.576 "data_offset": 2048, 00:08:36.576 "data_size": 63488 00:08:36.576 }, 00:08:36.576 { 00:08:36.576 "name": "pt4", 00:08:36.576 "uuid": "00000000-0000-0000-0000-000000000004", 00:08:36.576 "is_configured": true, 00:08:36.576 "data_offset": 2048, 00:08:36.576 "data_size": 63488 00:08:36.576 } 00:08:36.576 ] 00:08:36.576 } 00:08:36.576 } 00:08:36.576 }' 00:08:36.576 19:49:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:36.576 19:49:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:36.576 pt2 00:08:36.576 pt3 00:08:36.576 pt4' 00:08:36.576 19:49:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:36.838 19:49:27 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:36.838 19:49:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:36.838 19:49:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:36.838 19:49:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.838 19:49:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.838 19:49:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:36.838 19:49:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.838 19:49:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:36.838 19:49:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:36.838 19:49:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:36.838 19:49:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:36.838 19:49:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.838 19:49:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.838 19:49:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:36.838 19:49:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.838 19:49:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:36.838 19:49:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:36.838 19:49:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:36.838 19:49:27 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:36.838 19:49:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:08:36.838 19:49:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.838 19:49:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.838 19:49:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.838 19:49:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:36.838 19:49:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:36.838 19:49:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:36.838 19:49:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:08:36.838 19:49:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.838 19:49:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.838 19:49:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:36.838 19:49:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.838 19:49:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:36.838 19:49:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:36.838 19:49:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:36.838 19:49:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:08:36.838 19:49:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:08:36.838 19:49:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.838 [2024-11-26 19:49:27.654883] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:36.838 19:49:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.838 19:49:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=7bcbfa6b-816f-487d-b59f-b7ef812adc3d 00:08:36.838 19:49:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 7bcbfa6b-816f-487d-b59f-b7ef812adc3d ']' 00:08:36.838 19:49:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:36.838 19:49:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.838 19:49:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.838 [2024-11-26 19:49:27.682532] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:36.838 [2024-11-26 19:49:27.682555] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:36.838 [2024-11-26 19:49:27.682634] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:36.838 [2024-11-26 19:49:27.682712] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:36.838 [2024-11-26 19:49:27.682728] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:08:36.838 19:49:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.838 19:49:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:08:36.838 19:49:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:36.838 19:49:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:08:36.838 19:49:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.838 19:49:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.838 19:49:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:08:36.838 19:49:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:08:36.838 19:49:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:36.838 19:49:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:08:36.838 19:49:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.838 19:49:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.838 19:49:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.838 19:49:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:36.838 19:49:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:08:36.838 19:49:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.838 19:49:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.838 19:49:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.838 19:49:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:36.838 19:49:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:08:36.838 19:49:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.838 19:49:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.838 19:49:27 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.838 19:49:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:36.838 19:49:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:08:36.838 19:49:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.838 19:49:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.838 19:49:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.838 19:49:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:08:36.838 19:49:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:08:36.838 19:49:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.838 19:49:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.100 19:49:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.100 19:49:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:08:37.100 19:49:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:08:37.100 19:49:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:08:37.100 19:49:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:08:37.100 19:49:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:08:37.100 19:49:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:37.100 19:49:27 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:08:37.100 19:49:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:37.100 19:49:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:08:37.100 19:49:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.100 19:49:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.100 [2024-11-26 19:49:27.794606] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:37.100 [2024-11-26 19:49:27.796662] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:08:37.100 [2024-11-26 19:49:27.796713] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:08:37.100 [2024-11-26 19:49:27.796751] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:08:37.100 [2024-11-26 19:49:27.796803] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:08:37.100 [2024-11-26 19:49:27.796859] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:08:37.100 [2024-11-26 19:49:27.796879] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:08:37.100 [2024-11-26 19:49:27.796899] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:08:37.100 [2024-11-26 19:49:27.796913] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:37.100 [2024-11-26 19:49:27.796926] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007b00 name raid_bdev1, state configuring 00:08:37.100 request: 00:08:37.100 { 00:08:37.100 "name": "raid_bdev1", 00:08:37.100 "raid_level": "concat", 00:08:37.100 "base_bdevs": [ 00:08:37.100 "malloc1", 00:08:37.100 "malloc2", 00:08:37.100 "malloc3", 00:08:37.100 "malloc4" 00:08:37.100 ], 00:08:37.100 "strip_size_kb": 64, 00:08:37.100 "superblock": false, 00:08:37.100 "method": "bdev_raid_create", 00:08:37.100 "req_id": 1 00:08:37.100 } 00:08:37.100 Got JSON-RPC error response 00:08:37.100 response: 00:08:37.100 { 00:08:37.100 "code": -17, 00:08:37.100 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:08:37.100 } 00:08:37.100 19:49:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:08:37.100 19:49:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:08:37.100 19:49:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:37.100 19:49:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:37.100 19:49:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:37.100 19:49:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:37.100 19:49:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:08:37.100 19:49:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.100 19:49:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.100 19:49:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.100 19:49:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:08:37.100 19:49:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:08:37.100 19:49:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 
-u 00000000-0000-0000-0000-000000000001 00:08:37.100 19:49:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.100 19:49:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.100 [2024-11-26 19:49:27.842570] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:37.100 [2024-11-26 19:49:27.842725] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:37.100 [2024-11-26 19:49:27.842751] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:08:37.100 [2024-11-26 19:49:27.842761] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:37.100 [2024-11-26 19:49:27.845113] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:37.100 [2024-11-26 19:49:27.845149] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:37.100 [2024-11-26 19:49:27.845229] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:37.100 [2024-11-26 19:49:27.845284] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:37.100 pt1 00:08:37.100 19:49:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.100 19:49:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:08:37.100 19:49:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:37.100 19:49:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:37.100 19:49:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:37.100 19:49:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:37.100 19:49:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:08:37.100 19:49:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:37.100 19:49:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:37.100 19:49:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:37.100 19:49:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:37.100 19:49:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:37.100 19:49:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:37.100 19:49:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.100 19:49:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.100 19:49:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.100 19:49:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:37.100 "name": "raid_bdev1", 00:08:37.100 "uuid": "7bcbfa6b-816f-487d-b59f-b7ef812adc3d", 00:08:37.100 "strip_size_kb": 64, 00:08:37.100 "state": "configuring", 00:08:37.100 "raid_level": "concat", 00:08:37.100 "superblock": true, 00:08:37.100 "num_base_bdevs": 4, 00:08:37.100 "num_base_bdevs_discovered": 1, 00:08:37.100 "num_base_bdevs_operational": 4, 00:08:37.100 "base_bdevs_list": [ 00:08:37.100 { 00:08:37.100 "name": "pt1", 00:08:37.100 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:37.100 "is_configured": true, 00:08:37.100 "data_offset": 2048, 00:08:37.100 "data_size": 63488 00:08:37.100 }, 00:08:37.100 { 00:08:37.100 "name": null, 00:08:37.100 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:37.100 "is_configured": false, 00:08:37.100 "data_offset": 2048, 00:08:37.100 "data_size": 63488 00:08:37.100 }, 00:08:37.100 { 00:08:37.100 "name": null, 00:08:37.100 
"uuid": "00000000-0000-0000-0000-000000000003", 00:08:37.100 "is_configured": false, 00:08:37.100 "data_offset": 2048, 00:08:37.100 "data_size": 63488 00:08:37.100 }, 00:08:37.100 { 00:08:37.100 "name": null, 00:08:37.100 "uuid": "00000000-0000-0000-0000-000000000004", 00:08:37.100 "is_configured": false, 00:08:37.100 "data_offset": 2048, 00:08:37.100 "data_size": 63488 00:08:37.100 } 00:08:37.100 ] 00:08:37.100 }' 00:08:37.100 19:49:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:37.100 19:49:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.362 19:49:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:08:37.362 19:49:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:37.362 19:49:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.362 19:49:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.362 [2024-11-26 19:49:28.190692] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:37.362 [2024-11-26 19:49:28.190766] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:37.362 [2024-11-26 19:49:28.190786] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:08:37.362 [2024-11-26 19:49:28.190797] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:37.362 [2024-11-26 19:49:28.191265] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:37.362 [2024-11-26 19:49:28.191281] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:37.362 [2024-11-26 19:49:28.191373] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:37.362 [2024-11-26 19:49:28.191397] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:37.362 pt2 00:08:37.362 19:49:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.362 19:49:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:08:37.362 19:49:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.362 19:49:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.362 [2024-11-26 19:49:28.198701] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:08:37.362 19:49:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.362 19:49:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:08:37.362 19:49:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:37.362 19:49:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:37.362 19:49:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:37.362 19:49:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:37.362 19:49:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:08:37.362 19:49:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:37.362 19:49:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:37.362 19:49:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:37.362 19:49:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:37.362 19:49:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:37.362 19:49:28 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.362 19:49:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:37.362 19:49:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.362 19:49:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.362 19:49:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:37.362 "name": "raid_bdev1", 00:08:37.362 "uuid": "7bcbfa6b-816f-487d-b59f-b7ef812adc3d", 00:08:37.362 "strip_size_kb": 64, 00:08:37.362 "state": "configuring", 00:08:37.362 "raid_level": "concat", 00:08:37.362 "superblock": true, 00:08:37.362 "num_base_bdevs": 4, 00:08:37.362 "num_base_bdevs_discovered": 1, 00:08:37.362 "num_base_bdevs_operational": 4, 00:08:37.362 "base_bdevs_list": [ 00:08:37.362 { 00:08:37.362 "name": "pt1", 00:08:37.362 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:37.362 "is_configured": true, 00:08:37.362 "data_offset": 2048, 00:08:37.362 "data_size": 63488 00:08:37.362 }, 00:08:37.362 { 00:08:37.362 "name": null, 00:08:37.362 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:37.362 "is_configured": false, 00:08:37.362 "data_offset": 0, 00:08:37.362 "data_size": 63488 00:08:37.362 }, 00:08:37.362 { 00:08:37.362 "name": null, 00:08:37.362 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:37.362 "is_configured": false, 00:08:37.362 "data_offset": 2048, 00:08:37.362 "data_size": 63488 00:08:37.362 }, 00:08:37.362 { 00:08:37.362 "name": null, 00:08:37.362 "uuid": "00000000-0000-0000-0000-000000000004", 00:08:37.362 "is_configured": false, 00:08:37.362 "data_offset": 2048, 00:08:37.362 "data_size": 63488 00:08:37.362 } 00:08:37.362 ] 00:08:37.362 }' 00:08:37.362 19:49:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:37.362 19:49:28 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:37.623 19:49:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:08:37.623 19:49:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:37.623 19:49:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:37.623 19:49:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.623 19:49:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.623 [2024-11-26 19:49:28.530776] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:37.623 [2024-11-26 19:49:28.530849] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:37.623 [2024-11-26 19:49:28.530871] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:08:37.623 [2024-11-26 19:49:28.530880] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:37.623 [2024-11-26 19:49:28.531390] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:37.623 [2024-11-26 19:49:28.531405] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:37.623 [2024-11-26 19:49:28.531491] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:37.623 [2024-11-26 19:49:28.531513] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:37.623 pt2 00:08:37.623 19:49:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.623 19:49:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:37.623 19:49:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:37.623 19:49:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:08:37.623 19:49:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.623 19:49:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.623 [2024-11-26 19:49:28.542772] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:08:37.623 [2024-11-26 19:49:28.542826] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:37.623 [2024-11-26 19:49:28.542844] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:08:37.623 [2024-11-26 19:49:28.542852] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:37.623 [2024-11-26 19:49:28.543289] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:37.623 [2024-11-26 19:49:28.543302] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:08:37.623 [2024-11-26 19:49:28.543392] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:08:37.623 [2024-11-26 19:49:28.543417] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:08:37.623 pt3 00:08:37.623 19:49:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.623 19:49:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:37.623 19:49:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:37.623 19:49:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:08:37.623 19:49:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.623 19:49:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.623 [2024-11-26 19:49:28.550727] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:08:37.623 [2024-11-26 19:49:28.550776] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:37.623 [2024-11-26 19:49:28.550793] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:08:37.623 [2024-11-26 19:49:28.550802] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:37.623 [2024-11-26 19:49:28.551246] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:37.623 [2024-11-26 19:49:28.551259] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:08:37.623 [2024-11-26 19:49:28.551324] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:08:37.623 [2024-11-26 19:49:28.551364] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:08:37.623 [2024-11-26 19:49:28.551506] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:37.623 [2024-11-26 19:49:28.551515] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:08:37.623 [2024-11-26 19:49:28.551760] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:08:37.623 [2024-11-26 19:49:28.551899] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:37.623 [2024-11-26 19:49:28.551910] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:37.623 [2024-11-26 19:49:28.552032] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:37.623 pt4 00:08:37.623 19:49:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.623 19:49:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:37.623 19:49:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:08:37.623 19:49:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:08:37.623 19:49:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:37.623 19:49:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:37.623 19:49:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:37.623 19:49:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:37.623 19:49:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:08:37.623 19:49:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:37.623 19:49:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:37.623 19:49:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:37.623 19:49:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:37.884 19:49:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:37.884 19:49:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:37.884 19:49:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.884 19:49:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.884 19:49:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.884 19:49:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:37.884 "name": "raid_bdev1", 00:08:37.884 "uuid": "7bcbfa6b-816f-487d-b59f-b7ef812adc3d", 00:08:37.884 "strip_size_kb": 64, 00:08:37.884 "state": "online", 00:08:37.884 "raid_level": "concat", 00:08:37.884 
"superblock": true, 00:08:37.884 "num_base_bdevs": 4, 00:08:37.884 "num_base_bdevs_discovered": 4, 00:08:37.884 "num_base_bdevs_operational": 4, 00:08:37.884 "base_bdevs_list": [ 00:08:37.884 { 00:08:37.884 "name": "pt1", 00:08:37.884 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:37.884 "is_configured": true, 00:08:37.884 "data_offset": 2048, 00:08:37.884 "data_size": 63488 00:08:37.884 }, 00:08:37.884 { 00:08:37.884 "name": "pt2", 00:08:37.884 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:37.884 "is_configured": true, 00:08:37.884 "data_offset": 2048, 00:08:37.884 "data_size": 63488 00:08:37.884 }, 00:08:37.884 { 00:08:37.884 "name": "pt3", 00:08:37.884 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:37.884 "is_configured": true, 00:08:37.884 "data_offset": 2048, 00:08:37.884 "data_size": 63488 00:08:37.884 }, 00:08:37.884 { 00:08:37.884 "name": "pt4", 00:08:37.884 "uuid": "00000000-0000-0000-0000-000000000004", 00:08:37.884 "is_configured": true, 00:08:37.884 "data_offset": 2048, 00:08:37.884 "data_size": 63488 00:08:37.884 } 00:08:37.884 ] 00:08:37.884 }' 00:08:37.884 19:49:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:37.884 19:49:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.145 19:49:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:08:38.145 19:49:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:38.145 19:49:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:38.145 19:49:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:38.145 19:49:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:38.145 19:49:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:38.145 19:49:28 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:38.145 19:49:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:38.145 19:49:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.145 19:49:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.145 [2024-11-26 19:49:28.907238] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:38.145 19:49:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.145 19:49:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:38.145 "name": "raid_bdev1", 00:08:38.145 "aliases": [ 00:08:38.145 "7bcbfa6b-816f-487d-b59f-b7ef812adc3d" 00:08:38.145 ], 00:08:38.145 "product_name": "Raid Volume", 00:08:38.145 "block_size": 512, 00:08:38.145 "num_blocks": 253952, 00:08:38.145 "uuid": "7bcbfa6b-816f-487d-b59f-b7ef812adc3d", 00:08:38.145 "assigned_rate_limits": { 00:08:38.145 "rw_ios_per_sec": 0, 00:08:38.145 "rw_mbytes_per_sec": 0, 00:08:38.145 "r_mbytes_per_sec": 0, 00:08:38.145 "w_mbytes_per_sec": 0 00:08:38.145 }, 00:08:38.145 "claimed": false, 00:08:38.145 "zoned": false, 00:08:38.145 "supported_io_types": { 00:08:38.145 "read": true, 00:08:38.145 "write": true, 00:08:38.145 "unmap": true, 00:08:38.145 "flush": true, 00:08:38.145 "reset": true, 00:08:38.145 "nvme_admin": false, 00:08:38.145 "nvme_io": false, 00:08:38.145 "nvme_io_md": false, 00:08:38.145 "write_zeroes": true, 00:08:38.145 "zcopy": false, 00:08:38.145 "get_zone_info": false, 00:08:38.145 "zone_management": false, 00:08:38.145 "zone_append": false, 00:08:38.145 "compare": false, 00:08:38.145 "compare_and_write": false, 00:08:38.145 "abort": false, 00:08:38.145 "seek_hole": false, 00:08:38.145 "seek_data": false, 00:08:38.145 "copy": false, 00:08:38.145 "nvme_iov_md": false 00:08:38.145 }, 00:08:38.145 
"memory_domains": [ 00:08:38.145 { 00:08:38.145 "dma_device_id": "system", 00:08:38.145 "dma_device_type": 1 00:08:38.145 }, 00:08:38.145 { 00:08:38.145 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:38.145 "dma_device_type": 2 00:08:38.145 }, 00:08:38.145 { 00:08:38.145 "dma_device_id": "system", 00:08:38.145 "dma_device_type": 1 00:08:38.145 }, 00:08:38.145 { 00:08:38.145 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:38.145 "dma_device_type": 2 00:08:38.145 }, 00:08:38.145 { 00:08:38.145 "dma_device_id": "system", 00:08:38.145 "dma_device_type": 1 00:08:38.145 }, 00:08:38.145 { 00:08:38.145 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:38.145 "dma_device_type": 2 00:08:38.145 }, 00:08:38.145 { 00:08:38.145 "dma_device_id": "system", 00:08:38.145 "dma_device_type": 1 00:08:38.145 }, 00:08:38.145 { 00:08:38.145 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:38.145 "dma_device_type": 2 00:08:38.145 } 00:08:38.145 ], 00:08:38.145 "driver_specific": { 00:08:38.145 "raid": { 00:08:38.145 "uuid": "7bcbfa6b-816f-487d-b59f-b7ef812adc3d", 00:08:38.145 "strip_size_kb": 64, 00:08:38.145 "state": "online", 00:08:38.145 "raid_level": "concat", 00:08:38.145 "superblock": true, 00:08:38.145 "num_base_bdevs": 4, 00:08:38.145 "num_base_bdevs_discovered": 4, 00:08:38.145 "num_base_bdevs_operational": 4, 00:08:38.145 "base_bdevs_list": [ 00:08:38.145 { 00:08:38.145 "name": "pt1", 00:08:38.145 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:38.145 "is_configured": true, 00:08:38.145 "data_offset": 2048, 00:08:38.145 "data_size": 63488 00:08:38.145 }, 00:08:38.145 { 00:08:38.145 "name": "pt2", 00:08:38.145 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:38.145 "is_configured": true, 00:08:38.145 "data_offset": 2048, 00:08:38.145 "data_size": 63488 00:08:38.145 }, 00:08:38.145 { 00:08:38.145 "name": "pt3", 00:08:38.145 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:38.145 "is_configured": true, 00:08:38.145 "data_offset": 2048, 00:08:38.145 "data_size": 63488 
00:08:38.145 }, 00:08:38.145 { 00:08:38.145 "name": "pt4", 00:08:38.145 "uuid": "00000000-0000-0000-0000-000000000004", 00:08:38.145 "is_configured": true, 00:08:38.145 "data_offset": 2048, 00:08:38.145 "data_size": 63488 00:08:38.145 } 00:08:38.145 ] 00:08:38.145 } 00:08:38.145 } 00:08:38.145 }' 00:08:38.145 19:49:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:38.145 19:49:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:38.145 pt2 00:08:38.145 pt3 00:08:38.145 pt4' 00:08:38.145 19:49:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:38.145 19:49:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:38.145 19:49:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:38.145 19:49:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:38.145 19:49:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:38.145 19:49:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.145 19:49:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.145 19:49:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.145 19:49:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:38.145 19:49:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:38.145 19:49:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:38.145 19:49:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:08:38.145 19:49:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.145 19:49:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.145 19:49:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:38.145 19:49:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.145 19:49:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:38.145 19:49:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:38.146 19:49:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:38.146 19:49:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:08:38.146 19:49:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:38.146 19:49:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.146 19:49:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.146 19:49:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.406 19:49:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:38.406 19:49:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:38.406 19:49:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:38.406 19:49:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:08:38.406 19:49:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:08:38.406 19:49:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.406 19:49:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.406 19:49:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.406 19:49:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:38.406 19:49:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:38.406 19:49:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:38.406 19:49:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.406 19:49:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.407 19:49:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:08:38.407 [2024-11-26 19:49:29.135236] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:38.407 19:49:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.407 19:49:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 7bcbfa6b-816f-487d-b59f-b7ef812adc3d '!=' 7bcbfa6b-816f-487d-b59f-b7ef812adc3d ']' 00:08:38.407 19:49:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:08:38.407 19:49:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:38.407 19:49:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:38.407 19:49:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 70751 00:08:38.407 19:49:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 70751 ']' 00:08:38.407 19:49:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 70751 00:08:38.407 19:49:29 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@959 -- # uname 00:08:38.407 19:49:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:38.407 19:49:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70751 00:08:38.407 killing process with pid 70751 00:08:38.407 19:49:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:38.407 19:49:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:38.407 19:49:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70751' 00:08:38.407 19:49:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 70751 00:08:38.407 [2024-11-26 19:49:29.190708] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:38.407 19:49:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 70751 00:08:38.407 [2024-11-26 19:49:29.190806] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:38.407 [2024-11-26 19:49:29.190903] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:38.407 [2024-11-26 19:49:29.190915] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:38.668 [2024-11-26 19:49:29.463414] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:39.606 19:49:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:08:39.606 00:08:39.606 real 0m4.340s 00:08:39.606 user 0m6.080s 00:08:39.606 sys 0m0.716s 00:08:39.606 ************************************ 00:08:39.606 END TEST raid_superblock_test 00:08:39.606 ************************************ 00:08:39.606 19:49:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:39.606 19:49:30 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.606 19:49:30 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 4 read 00:08:39.606 19:49:30 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:39.606 19:49:30 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:39.606 19:49:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:39.606 ************************************ 00:08:39.606 START TEST raid_read_error_test 00:08:39.606 ************************************ 00:08:39.606 19:49:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 4 read 00:08:39.606 19:49:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:08:39.606 19:49:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:08:39.606 19:49:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:08:39.606 19:49:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:39.606 19:49:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:39.606 19:49:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:39.606 19:49:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:39.606 19:49:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:39.606 19:49:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:39.607 19:49:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:39.607 19:49:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:39.607 19:49:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:08:39.607 19:49:30 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:39.607 19:49:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:39.607 19:49:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:08:39.607 19:49:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:39.607 19:49:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:39.607 19:49:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:08:39.607 19:49:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:39.607 19:49:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:39.607 19:49:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:39.607 19:49:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:39.607 19:49:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:39.607 19:49:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:39.607 19:49:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:08:39.607 19:49:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:39.607 19:49:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:39.607 19:49:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:39.607 19:49:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.tOfYbFTsQj 00:08:39.607 19:49:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=71000 00:08:39.607 19:49:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 71000 00:08:39.607 19:49:30 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@835 -- # '[' -z 71000 ']' 00:08:39.607 19:49:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:39.607 19:49:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:39.607 19:49:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:39.607 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:39.607 19:49:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:39.607 19:49:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:39.607 19:49:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.607 [2024-11-26 19:49:30.492487] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 
00:08:39.607 [2024-11-26 19:49:30.492641] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71000 ] 00:08:39.863 [2024-11-26 19:49:30.648705] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:39.863 [2024-11-26 19:49:30.767939] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:40.120 [2024-11-26 19:49:30.915360] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:40.120 [2024-11-26 19:49:30.915430] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:40.685 19:49:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:40.685 19:49:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:40.685 19:49:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:40.685 19:49:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:40.685 19:49:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.685 19:49:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.685 BaseBdev1_malloc 00:08:40.685 19:49:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.685 19:49:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:40.685 19:49:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.685 19:49:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.685 true 00:08:40.685 19:49:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:08:40.685 19:49:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:40.685 19:49:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.685 19:49:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.685 [2024-11-26 19:49:31.383134] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:40.685 [2024-11-26 19:49:31.383194] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:40.685 [2024-11-26 19:49:31.383214] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:40.685 [2024-11-26 19:49:31.383225] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:40.685 [2024-11-26 19:49:31.385583] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:40.685 [2024-11-26 19:49:31.385621] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:40.685 BaseBdev1 00:08:40.685 19:49:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.685 19:49:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:40.685 19:49:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:40.685 19:49:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.685 19:49:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.685 BaseBdev2_malloc 00:08:40.685 19:49:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.685 19:49:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:40.685 19:49:31 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.685 19:49:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.685 true 00:08:40.685 19:49:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.685 19:49:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:40.685 19:49:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.685 19:49:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.685 [2024-11-26 19:49:31.429211] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:40.685 [2024-11-26 19:49:31.429264] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:40.685 [2024-11-26 19:49:31.429281] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:40.685 [2024-11-26 19:49:31.429292] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:40.685 [2024-11-26 19:49:31.431579] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:40.685 [2024-11-26 19:49:31.431618] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:40.685 BaseBdev2 00:08:40.685 19:49:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.685 19:49:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:40.685 19:49:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:08:40.685 19:49:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.685 19:49:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.685 BaseBdev3_malloc 00:08:40.685 19:49:31 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.685 19:49:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:08:40.685 19:49:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.685 19:49:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.685 true 00:08:40.685 BaseBdev3 00:08:40.685 19:49:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.685 19:49:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:08:40.685 19:49:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.685 19:49:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.685 [2024-11-26 19:49:31.492388] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:08:40.685 [2024-11-26 19:49:31.492436] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:40.685 [2024-11-26 19:49:31.492454] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:08:40.685 [2024-11-26 19:49:31.492465] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:40.685 [2024-11-26 19:49:31.494717] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:40.685 [2024-11-26 19:49:31.494746] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:08:40.685 19:49:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.685 19:49:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:40.685 19:49:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:08:40.685 19:49:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.685 19:49:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.685 BaseBdev4_malloc 00:08:40.685 19:49:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.685 19:49:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:08:40.685 19:49:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.685 19:49:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.685 true 00:08:40.685 19:49:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.685 19:49:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:08:40.685 19:49:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.685 19:49:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.685 [2024-11-26 19:49:31.538545] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:08:40.686 [2024-11-26 19:49:31.538593] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:40.686 [2024-11-26 19:49:31.538610] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:08:40.686 [2024-11-26 19:49:31.538621] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:40.686 [2024-11-26 19:49:31.540852] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:40.686 [2024-11-26 19:49:31.540890] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:08:40.686 BaseBdev4 00:08:40.686 19:49:31 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.686 19:49:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:08:40.686 19:49:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.686 19:49:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.686 [2024-11-26 19:49:31.546620] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:40.686 [2024-11-26 19:49:31.548583] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:40.686 [2024-11-26 19:49:31.548664] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:40.686 [2024-11-26 19:49:31.548730] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:08:40.686 [2024-11-26 19:49:31.548957] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:08:40.686 [2024-11-26 19:49:31.548975] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:08:40.686 [2024-11-26 19:49:31.549225] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:08:40.686 [2024-11-26 19:49:31.549393] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:08:40.686 [2024-11-26 19:49:31.549405] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:08:40.686 [2024-11-26 19:49:31.549549] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:40.686 19:49:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.686 19:49:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:08:40.686 19:49:31 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:40.686 19:49:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:40.686 19:49:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:40.686 19:49:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:40.686 19:49:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:08:40.686 19:49:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:40.686 19:49:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:40.686 19:49:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:40.686 19:49:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:40.686 19:49:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:40.686 19:49:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:40.686 19:49:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.686 19:49:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.686 19:49:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.686 19:49:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:40.686 "name": "raid_bdev1", 00:08:40.686 "uuid": "c7cf2d9e-9726-4cc3-bfe3-a9f7685eaf8f", 00:08:40.686 "strip_size_kb": 64, 00:08:40.686 "state": "online", 00:08:40.686 "raid_level": "concat", 00:08:40.686 "superblock": true, 00:08:40.686 "num_base_bdevs": 4, 00:08:40.686 "num_base_bdevs_discovered": 4, 00:08:40.686 "num_base_bdevs_operational": 4, 00:08:40.686 "base_bdevs_list": [ 
00:08:40.686 { 00:08:40.686 "name": "BaseBdev1", 00:08:40.686 "uuid": "a906d87f-47ac-5da7-b1cd-ef77d501da98", 00:08:40.686 "is_configured": true, 00:08:40.686 "data_offset": 2048, 00:08:40.686 "data_size": 63488 00:08:40.686 }, 00:08:40.686 { 00:08:40.686 "name": "BaseBdev2", 00:08:40.686 "uuid": "70f5a7fd-c3a9-5469-b387-c973dd10258d", 00:08:40.686 "is_configured": true, 00:08:40.686 "data_offset": 2048, 00:08:40.686 "data_size": 63488 00:08:40.686 }, 00:08:40.686 { 00:08:40.686 "name": "BaseBdev3", 00:08:40.686 "uuid": "c36ae19d-4723-5fcf-a435-3d7bdc624712", 00:08:40.686 "is_configured": true, 00:08:40.686 "data_offset": 2048, 00:08:40.686 "data_size": 63488 00:08:40.686 }, 00:08:40.686 { 00:08:40.686 "name": "BaseBdev4", 00:08:40.686 "uuid": "42150f5c-a242-50eb-9ddc-0182f9fb30d3", 00:08:40.686 "is_configured": true, 00:08:40.686 "data_offset": 2048, 00:08:40.686 "data_size": 63488 00:08:40.686 } 00:08:40.686 ] 00:08:40.686 }' 00:08:40.686 19:49:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:40.686 19:49:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.942 19:49:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:40.942 19:49:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:41.199 [2024-11-26 19:49:31.931778] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:08:42.130 19:49:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:08:42.130 19:49:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.130 19:49:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.130 19:49:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.130 19:49:32 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:42.130 19:49:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:08:42.130 19:49:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:08:42.130 19:49:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:08:42.130 19:49:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:42.130 19:49:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:42.130 19:49:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:42.130 19:49:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:42.130 19:49:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:08:42.130 19:49:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:42.130 19:49:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:42.130 19:49:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:42.130 19:49:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:42.130 19:49:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:42.130 19:49:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.130 19:49:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:42.130 19:49:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.130 19:49:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.130 19:49:32 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:42.130 "name": "raid_bdev1", 00:08:42.130 "uuid": "c7cf2d9e-9726-4cc3-bfe3-a9f7685eaf8f", 00:08:42.130 "strip_size_kb": 64, 00:08:42.130 "state": "online", 00:08:42.130 "raid_level": "concat", 00:08:42.130 "superblock": true, 00:08:42.130 "num_base_bdevs": 4, 00:08:42.130 "num_base_bdevs_discovered": 4, 00:08:42.130 "num_base_bdevs_operational": 4, 00:08:42.130 "base_bdevs_list": [ 00:08:42.130 { 00:08:42.130 "name": "BaseBdev1", 00:08:42.130 "uuid": "a906d87f-47ac-5da7-b1cd-ef77d501da98", 00:08:42.130 "is_configured": true, 00:08:42.130 "data_offset": 2048, 00:08:42.130 "data_size": 63488 00:08:42.130 }, 00:08:42.130 { 00:08:42.130 "name": "BaseBdev2", 00:08:42.130 "uuid": "70f5a7fd-c3a9-5469-b387-c973dd10258d", 00:08:42.130 "is_configured": true, 00:08:42.130 "data_offset": 2048, 00:08:42.130 "data_size": 63488 00:08:42.130 }, 00:08:42.130 { 00:08:42.130 "name": "BaseBdev3", 00:08:42.130 "uuid": "c36ae19d-4723-5fcf-a435-3d7bdc624712", 00:08:42.130 "is_configured": true, 00:08:42.130 "data_offset": 2048, 00:08:42.130 "data_size": 63488 00:08:42.130 }, 00:08:42.130 { 00:08:42.130 "name": "BaseBdev4", 00:08:42.130 "uuid": "42150f5c-a242-50eb-9ddc-0182f9fb30d3", 00:08:42.130 "is_configured": true, 00:08:42.130 "data_offset": 2048, 00:08:42.130 "data_size": 63488 00:08:42.130 } 00:08:42.130 ] 00:08:42.130 }' 00:08:42.130 19:49:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:42.130 19:49:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.389 19:49:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:42.389 19:49:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.389 19:49:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.389 [2024-11-26 19:49:33.157999] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:42.389 [2024-11-26 19:49:33.158037] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:42.389 [2024-11-26 19:49:33.161156] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:42.389 [2024-11-26 19:49:33.161225] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:42.389 [2024-11-26 19:49:33.161271] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:42.389 [2024-11-26 19:49:33.161285] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:08:42.389 { 00:08:42.389 "results": [ 00:08:42.389 { 00:08:42.389 "job": "raid_bdev1", 00:08:42.389 "core_mask": "0x1", 00:08:42.389 "workload": "randrw", 00:08:42.389 "percentage": 50, 00:08:42.389 "status": "finished", 00:08:42.389 "queue_depth": 1, 00:08:42.389 "io_size": 131072, 00:08:42.389 "runtime": 1.224204, 00:08:42.389 "iops": 13947.021901578495, 00:08:42.389 "mibps": 1743.3777376973119, 00:08:42.389 "io_failed": 1, 00:08:42.389 "io_timeout": 0, 00:08:42.389 "avg_latency_us": 98.3579851785111, 00:08:42.389 "min_latency_us": 33.28, 00:08:42.389 "max_latency_us": 1714.0184615384615 00:08:42.389 } 00:08:42.389 ], 00:08:42.389 "core_count": 1 00:08:42.389 } 00:08:42.389 19:49:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.389 19:49:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 71000 00:08:42.389 19:49:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 71000 ']' 00:08:42.389 19:49:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 71000 00:08:42.389 19:49:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:08:42.389 19:49:33 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:42.389 19:49:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71000 00:08:42.389 19:49:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:42.389 killing process with pid 71000 00:08:42.389 19:49:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:42.389 19:49:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71000' 00:08:42.389 19:49:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 71000 00:08:42.389 [2024-11-26 19:49:33.183800] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:42.389 19:49:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 71000 00:08:42.647 [2024-11-26 19:49:33.391661] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:43.213 19:49:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:43.213 19:49:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.tOfYbFTsQj 00:08:43.213 19:49:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:43.213 19:49:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.82 00:08:43.213 19:49:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:08:43.213 19:49:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:43.213 19:49:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:43.213 19:49:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.82 != \0\.\0\0 ]] 00:08:43.213 00:08:43.213 real 0m3.629s 00:08:43.213 user 0m4.202s 00:08:43.213 sys 0m0.485s 00:08:43.213 19:49:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:08:43.213 ************************************ 00:08:43.213 END TEST raid_read_error_test 00:08:43.213 ************************************ 00:08:43.213 19:49:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.213 19:49:34 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 4 write 00:08:43.213 19:49:34 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:43.213 19:49:34 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:43.213 19:49:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:43.213 ************************************ 00:08:43.213 START TEST raid_write_error_test 00:08:43.213 ************************************ 00:08:43.213 19:49:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 4 write 00:08:43.213 19:49:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:08:43.213 19:49:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:08:43.213 19:49:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:08:43.213 19:49:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:43.213 19:49:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:43.213 19:49:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:43.213 19:49:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:43.213 19:49:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:43.213 19:49:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:43.213 19:49:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:43.213 19:49:34 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:43.213 19:49:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:08:43.213 19:49:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:43.213 19:49:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:43.213 19:49:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:08:43.213 19:49:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:43.213 19:49:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:43.213 19:49:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:08:43.213 19:49:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:43.214 19:49:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:43.214 19:49:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:43.214 19:49:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:43.214 19:49:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:43.214 19:49:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:43.214 19:49:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:08:43.214 19:49:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:43.214 19:49:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:43.214 19:49:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:43.214 19:49:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.VcIBDLyQE4 00:08:43.214 19:49:34 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=71134 00:08:43.214 19:49:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 71134 00:08:43.214 19:49:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 71134 ']' 00:08:43.214 19:49:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:43.214 19:49:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:43.214 19:49:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:43.214 19:49:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:43.214 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:43.214 19:49:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:43.214 19:49:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.472 [2024-11-26 19:49:34.152253] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 
00:08:43.472 [2024-11-26 19:49:34.152382] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71134 ] 00:08:43.472 [2024-11-26 19:49:34.306381] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:43.728 [2024-11-26 19:49:34.407905] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:43.728 [2024-11-26 19:49:34.528149] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:43.728 [2024-11-26 19:49:34.528191] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:44.293 19:49:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:44.293 19:49:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:44.293 19:49:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:44.293 19:49:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:44.293 19:49:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.293 19:49:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.293 BaseBdev1_malloc 00:08:44.293 19:49:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.293 19:49:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:44.293 19:49:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.293 19:49:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.293 true 00:08:44.293 19:49:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:08:44.293 19:49:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:44.293 19:49:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.293 19:49:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.293 [2024-11-26 19:49:35.034401] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:44.293 [2024-11-26 19:49:35.034451] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:44.293 [2024-11-26 19:49:35.034468] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:44.293 [2024-11-26 19:49:35.034477] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:44.293 [2024-11-26 19:49:35.036454] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:44.293 [2024-11-26 19:49:35.036493] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:44.293 BaseBdev1 00:08:44.293 19:49:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.293 19:49:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:44.293 19:49:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:44.293 19:49:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.293 19:49:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.293 BaseBdev2_malloc 00:08:44.293 19:49:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.293 19:49:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:44.293 19:49:35 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.293 19:49:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.293 true 00:08:44.293 19:49:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.293 19:49:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:44.293 19:49:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.293 19:49:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.293 [2024-11-26 19:49:35.076181] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:44.293 [2024-11-26 19:49:35.076226] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:44.293 [2024-11-26 19:49:35.076240] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:44.293 [2024-11-26 19:49:35.076249] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:44.293 [2024-11-26 19:49:35.078078] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:44.293 [2024-11-26 19:49:35.078109] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:44.293 BaseBdev2 00:08:44.293 19:49:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.293 19:49:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:44.293 19:49:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:08:44.293 19:49:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.293 19:49:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:08:44.293 BaseBdev3_malloc 00:08:44.293 19:49:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.293 19:49:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:08:44.293 19:49:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.293 19:49:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.293 true 00:08:44.293 19:49:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.293 19:49:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:08:44.293 19:49:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.293 19:49:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.293 [2024-11-26 19:49:35.132628] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:08:44.293 [2024-11-26 19:49:35.132670] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:44.293 [2024-11-26 19:49:35.132684] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:08:44.293 [2024-11-26 19:49:35.132693] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:44.293 [2024-11-26 19:49:35.134503] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:44.293 [2024-11-26 19:49:35.134533] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:08:44.293 BaseBdev3 00:08:44.294 19:49:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.294 19:49:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:44.294 19:49:35 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:08:44.294 19:49:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.294 19:49:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.294 BaseBdev4_malloc 00:08:44.294 19:49:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.294 19:49:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:08:44.294 19:49:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.294 19:49:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.294 true 00:08:44.294 19:49:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.294 19:49:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:08:44.294 19:49:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.294 19:49:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.294 [2024-11-26 19:49:35.173966] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:08:44.294 [2024-11-26 19:49:35.174006] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:44.294 [2024-11-26 19:49:35.174021] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:08:44.294 [2024-11-26 19:49:35.174030] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:44.294 [2024-11-26 19:49:35.175933] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:44.294 [2024-11-26 19:49:35.175966] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:08:44.294 BaseBdev4 
00:08:44.294 19:49:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.294 19:49:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:08:44.294 19:49:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.294 19:49:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.294 [2024-11-26 19:49:35.182013] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:44.294 [2024-11-26 19:49:35.183607] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:44.294 [2024-11-26 19:49:35.183673] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:44.294 [2024-11-26 19:49:35.183726] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:08:44.294 [2024-11-26 19:49:35.183909] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:08:44.294 [2024-11-26 19:49:35.183922] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:08:44.294 [2024-11-26 19:49:35.184118] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:08:44.294 [2024-11-26 19:49:35.184242] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:08:44.294 [2024-11-26 19:49:35.184250] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:08:44.294 [2024-11-26 19:49:35.184378] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:44.294 19:49:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.294 19:49:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online concat 64 4 00:08:44.294 19:49:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:44.294 19:49:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:44.294 19:49:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:44.294 19:49:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:44.294 19:49:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:08:44.294 19:49:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:44.294 19:49:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:44.294 19:49:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:44.294 19:49:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:44.294 19:49:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:44.294 19:49:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.294 19:49:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:44.294 19:49:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.294 19:49:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.294 19:49:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:44.294 "name": "raid_bdev1", 00:08:44.294 "uuid": "dbc8d240-0a6c-4633-bcb8-aee50098b627", 00:08:44.294 "strip_size_kb": 64, 00:08:44.294 "state": "online", 00:08:44.294 "raid_level": "concat", 00:08:44.294 "superblock": true, 00:08:44.294 "num_base_bdevs": 4, 00:08:44.294 "num_base_bdevs_discovered": 4, 00:08:44.294 
"num_base_bdevs_operational": 4, 00:08:44.294 "base_bdevs_list": [ 00:08:44.294 { 00:08:44.294 "name": "BaseBdev1", 00:08:44.294 "uuid": "cd084e25-2f96-56d7-bf39-6799c582af9c", 00:08:44.294 "is_configured": true, 00:08:44.294 "data_offset": 2048, 00:08:44.294 "data_size": 63488 00:08:44.294 }, 00:08:44.294 { 00:08:44.294 "name": "BaseBdev2", 00:08:44.294 "uuid": "0ee6114d-b455-5097-b5e4-29242a8a0de9", 00:08:44.294 "is_configured": true, 00:08:44.294 "data_offset": 2048, 00:08:44.294 "data_size": 63488 00:08:44.294 }, 00:08:44.294 { 00:08:44.294 "name": "BaseBdev3", 00:08:44.294 "uuid": "abf0ab63-5a42-5c8f-b777-5ae1d03c8210", 00:08:44.294 "is_configured": true, 00:08:44.294 "data_offset": 2048, 00:08:44.294 "data_size": 63488 00:08:44.294 }, 00:08:44.294 { 00:08:44.294 "name": "BaseBdev4", 00:08:44.294 "uuid": "ab99ebfa-6600-5a29-8c07-aaad48d5ce28", 00:08:44.294 "is_configured": true, 00:08:44.294 "data_offset": 2048, 00:08:44.294 "data_size": 63488 00:08:44.294 } 00:08:44.294 ] 00:08:44.294 }' 00:08:44.294 19:49:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:44.294 19:49:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.859 19:49:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:44.859 19:49:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:44.859 [2024-11-26 19:49:35.582961] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:08:45.793 19:49:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:08:45.793 19:49:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.793 19:49:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.793 19:49:36 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.793 19:49:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:45.793 19:49:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:08:45.793 19:49:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:08:45.793 19:49:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:08:45.793 19:49:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:45.793 19:49:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:45.793 19:49:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:45.793 19:49:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:45.793 19:49:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:08:45.793 19:49:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:45.793 19:49:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:45.793 19:49:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:45.793 19:49:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:45.793 19:49:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:45.793 19:49:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:45.793 19:49:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.793 19:49:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.793 19:49:36 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.793 19:49:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:45.793 "name": "raid_bdev1", 00:08:45.793 "uuid": "dbc8d240-0a6c-4633-bcb8-aee50098b627", 00:08:45.793 "strip_size_kb": 64, 00:08:45.793 "state": "online", 00:08:45.793 "raid_level": "concat", 00:08:45.793 "superblock": true, 00:08:45.793 "num_base_bdevs": 4, 00:08:45.793 "num_base_bdevs_discovered": 4, 00:08:45.793 "num_base_bdevs_operational": 4, 00:08:45.793 "base_bdevs_list": [ 00:08:45.793 { 00:08:45.793 "name": "BaseBdev1", 00:08:45.793 "uuid": "cd084e25-2f96-56d7-bf39-6799c582af9c", 00:08:45.793 "is_configured": true, 00:08:45.793 "data_offset": 2048, 00:08:45.793 "data_size": 63488 00:08:45.793 }, 00:08:45.793 { 00:08:45.793 "name": "BaseBdev2", 00:08:45.793 "uuid": "0ee6114d-b455-5097-b5e4-29242a8a0de9", 00:08:45.793 "is_configured": true, 00:08:45.793 "data_offset": 2048, 00:08:45.793 "data_size": 63488 00:08:45.793 }, 00:08:45.793 { 00:08:45.793 "name": "BaseBdev3", 00:08:45.793 "uuid": "abf0ab63-5a42-5c8f-b777-5ae1d03c8210", 00:08:45.793 "is_configured": true, 00:08:45.793 "data_offset": 2048, 00:08:45.793 "data_size": 63488 00:08:45.793 }, 00:08:45.793 { 00:08:45.793 "name": "BaseBdev4", 00:08:45.793 "uuid": "ab99ebfa-6600-5a29-8c07-aaad48d5ce28", 00:08:45.793 "is_configured": true, 00:08:45.793 "data_offset": 2048, 00:08:45.793 "data_size": 63488 00:08:45.793 } 00:08:45.793 ] 00:08:45.793 }' 00:08:45.793 19:49:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:45.793 19:49:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.052 19:49:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:46.052 19:49:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.052 19:49:36 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:46.052 [2024-11-26 19:49:36.844283] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:46.052 [2024-11-26 19:49:36.844319] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:46.052 [2024-11-26 19:49:36.846746] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:46.052 [2024-11-26 19:49:36.846815] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:46.052 [2024-11-26 19:49:36.846855] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:46.052 [2024-11-26 19:49:36.846868] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:08:46.052 19:49:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.052 { 00:08:46.052 "results": [ 00:08:46.052 { 00:08:46.052 "job": "raid_bdev1", 00:08:46.052 "core_mask": "0x1", 00:08:46.052 "workload": "randrw", 00:08:46.052 "percentage": 50, 00:08:46.052 "status": "finished", 00:08:46.052 "queue_depth": 1, 00:08:46.052 "io_size": 131072, 00:08:46.052 "runtime": 1.259634, 00:08:46.052 "iops": 16996.206834683726, 00:08:46.052 "mibps": 2124.5258543354657, 00:08:46.052 "io_failed": 1, 00:08:46.052 "io_timeout": 0, 00:08:46.052 "avg_latency_us": 81.07307440807675, 00:08:46.052 "min_latency_us": 25.6, 00:08:46.052 "max_latency_us": 1323.323076923077 00:08:46.052 } 00:08:46.052 ], 00:08:46.052 "core_count": 1 00:08:46.052 } 00:08:46.052 19:49:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 71134 00:08:46.052 19:49:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 71134 ']' 00:08:46.052 19:49:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 71134 00:08:46.052 19:49:36 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@959 -- # uname 00:08:46.052 19:49:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:46.052 19:49:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71134 00:08:46.052 19:49:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:46.052 19:49:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:46.052 killing process with pid 71134 00:08:46.052 19:49:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71134' 00:08:46.052 19:49:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 71134 00:08:46.052 [2024-11-26 19:49:36.876312] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:46.052 19:49:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 71134 00:08:46.310 [2024-11-26 19:49:37.044241] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:46.878 19:49:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:46.878 19:49:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.VcIBDLyQE4 00:08:46.878 19:49:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:46.878 19:49:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.79 00:08:46.878 19:49:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:08:46.878 19:49:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:46.878 19:49:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:46.878 19:49:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.79 != \0\.\0\0 ]] 00:08:46.878 00:08:46.878 real 0m3.620s 00:08:46.878 user 0m4.299s 
00:08:46.878 sys 0m0.432s 00:08:46.878 19:49:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:46.878 19:49:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.878 ************************************ 00:08:46.878 END TEST raid_write_error_test 00:08:46.878 ************************************ 00:08:46.878 19:49:37 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:08:46.878 19:49:37 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:08:46.878 19:49:37 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:46.878 19:49:37 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:46.878 19:49:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:46.878 ************************************ 00:08:46.878 START TEST raid_state_function_test 00:08:46.878 ************************************ 00:08:46.878 19:49:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 4 false 00:08:46.878 19:49:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:08:46.878 19:49:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:08:46.878 19:49:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:08:46.878 19:49:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:46.878 19:49:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:46.878 19:49:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:46.878 19:49:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:46.878 19:49:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:46.878 
19:49:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:46.878 19:49:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:46.878 19:49:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:46.878 19:49:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:46.878 19:49:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:46.878 19:49:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:46.878 19:49:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:46.878 19:49:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:08:46.878 19:49:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:46.878 19:49:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:46.878 19:49:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:08:46.878 19:49:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:46.878 19:49:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:46.878 19:49:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:46.878 19:49:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:46.878 19:49:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:46.878 19:49:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:08:46.878 19:49:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:08:46.878 19:49:37 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:08:46.878 19:49:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:08:46.878 19:49:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=71267 00:08:46.878 Process raid pid: 71267 00:08:46.878 19:49:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 71267' 00:08:46.878 19:49:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 71267 00:08:46.878 19:49:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 71267 ']' 00:08:46.878 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:46.878 19:49:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:46.878 19:49:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:46.878 19:49:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:46.878 19:49:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:46.878 19:49:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.878 19:49:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:46.878 [2024-11-26 19:49:37.800455] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 
00:08:46.878 [2024-11-26 19:49:37.800560] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:47.136 [2024-11-26 19:49:37.950244] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:47.136 [2024-11-26 19:49:38.051898] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:47.394 [2024-11-26 19:49:38.174179] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:47.394 [2024-11-26 19:49:38.174218] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:47.961 19:49:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:47.961 19:49:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:08:47.961 19:49:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:08:47.961 19:49:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.961 19:49:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.961 [2024-11-26 19:49:38.652537] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:47.961 [2024-11-26 19:49:38.652587] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:47.961 [2024-11-26 19:49:38.652596] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:47.961 [2024-11-26 19:49:38.652604] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:47.961 [2024-11-26 19:49:38.652609] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:08:47.961 [2024-11-26 19:49:38.652616] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:47.961 [2024-11-26 19:49:38.652621] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:08:47.961 [2024-11-26 19:49:38.652627] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:08:47.961 19:49:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.961 19:49:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:08:47.961 19:49:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:47.961 19:49:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:47.961 19:49:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:47.961 19:49:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:47.961 19:49:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:08:47.961 19:49:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:47.961 19:49:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:47.961 19:49:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:47.961 19:49:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:47.961 19:49:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:47.961 19:49:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.961 19:49:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:08:47.961 19:49:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.961 19:49:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.961 19:49:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:47.961 "name": "Existed_Raid", 00:08:47.961 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:47.961 "strip_size_kb": 0, 00:08:47.961 "state": "configuring", 00:08:47.961 "raid_level": "raid1", 00:08:47.961 "superblock": false, 00:08:47.961 "num_base_bdevs": 4, 00:08:47.961 "num_base_bdevs_discovered": 0, 00:08:47.961 "num_base_bdevs_operational": 4, 00:08:47.961 "base_bdevs_list": [ 00:08:47.961 { 00:08:47.961 "name": "BaseBdev1", 00:08:47.961 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:47.961 "is_configured": false, 00:08:47.961 "data_offset": 0, 00:08:47.961 "data_size": 0 00:08:47.961 }, 00:08:47.961 { 00:08:47.961 "name": "BaseBdev2", 00:08:47.961 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:47.961 "is_configured": false, 00:08:47.961 "data_offset": 0, 00:08:47.961 "data_size": 0 00:08:47.961 }, 00:08:47.961 { 00:08:47.961 "name": "BaseBdev3", 00:08:47.961 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:47.961 "is_configured": false, 00:08:47.961 "data_offset": 0, 00:08:47.961 "data_size": 0 00:08:47.961 }, 00:08:47.961 { 00:08:47.961 "name": "BaseBdev4", 00:08:47.961 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:47.961 "is_configured": false, 00:08:47.961 "data_offset": 0, 00:08:47.961 "data_size": 0 00:08:47.961 } 00:08:47.961 ] 00:08:47.961 }' 00:08:47.961 19:49:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:47.961 19:49:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.220 19:49:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 
00:08:48.220 19:49:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.220 19:49:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.220 [2024-11-26 19:49:38.976555] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:48.220 [2024-11-26 19:49:38.976592] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:48.220 19:49:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.220 19:49:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:08:48.220 19:49:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.220 19:49:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.220 [2024-11-26 19:49:38.984550] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:48.220 [2024-11-26 19:49:38.984587] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:48.220 [2024-11-26 19:49:38.984594] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:48.220 [2024-11-26 19:49:38.984601] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:48.220 [2024-11-26 19:49:38.984606] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:48.220 [2024-11-26 19:49:38.984614] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:48.220 [2024-11-26 19:49:38.984619] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:08:48.220 [2024-11-26 19:49:38.984626] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: 
*DEBUG*: base bdev BaseBdev4 doesn't exist now 00:08:48.220 19:49:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.220 19:49:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:48.220 19:49:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.220 19:49:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.220 [2024-11-26 19:49:39.014314] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:48.220 BaseBdev1 00:08:48.220 19:49:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.220 19:49:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:48.220 19:49:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:48.220 19:49:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:48.220 19:49:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:48.220 19:49:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:48.220 19:49:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:48.220 19:49:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:48.220 19:49:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.220 19:49:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.220 19:49:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.220 19:49:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 
-t 2000 00:08:48.220 19:49:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.220 19:49:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.220 [ 00:08:48.220 { 00:08:48.220 "name": "BaseBdev1", 00:08:48.220 "aliases": [ 00:08:48.220 "239987a5-e204-4cef-bb53-61d056fe24a4" 00:08:48.220 ], 00:08:48.220 "product_name": "Malloc disk", 00:08:48.220 "block_size": 512, 00:08:48.220 "num_blocks": 65536, 00:08:48.220 "uuid": "239987a5-e204-4cef-bb53-61d056fe24a4", 00:08:48.220 "assigned_rate_limits": { 00:08:48.220 "rw_ios_per_sec": 0, 00:08:48.220 "rw_mbytes_per_sec": 0, 00:08:48.220 "r_mbytes_per_sec": 0, 00:08:48.220 "w_mbytes_per_sec": 0 00:08:48.220 }, 00:08:48.220 "claimed": true, 00:08:48.220 "claim_type": "exclusive_write", 00:08:48.220 "zoned": false, 00:08:48.220 "supported_io_types": { 00:08:48.220 "read": true, 00:08:48.220 "write": true, 00:08:48.220 "unmap": true, 00:08:48.220 "flush": true, 00:08:48.220 "reset": true, 00:08:48.220 "nvme_admin": false, 00:08:48.220 "nvme_io": false, 00:08:48.220 "nvme_io_md": false, 00:08:48.220 "write_zeroes": true, 00:08:48.220 "zcopy": true, 00:08:48.220 "get_zone_info": false, 00:08:48.220 "zone_management": false, 00:08:48.220 "zone_append": false, 00:08:48.220 "compare": false, 00:08:48.220 "compare_and_write": false, 00:08:48.220 "abort": true, 00:08:48.220 "seek_hole": false, 00:08:48.220 "seek_data": false, 00:08:48.220 "copy": true, 00:08:48.220 "nvme_iov_md": false 00:08:48.220 }, 00:08:48.220 "memory_domains": [ 00:08:48.220 { 00:08:48.220 "dma_device_id": "system", 00:08:48.220 "dma_device_type": 1 00:08:48.220 }, 00:08:48.220 { 00:08:48.220 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:48.220 "dma_device_type": 2 00:08:48.220 } 00:08:48.220 ], 00:08:48.220 "driver_specific": {} 00:08:48.220 } 00:08:48.220 ] 00:08:48.220 19:49:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:08:48.220 19:49:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:48.221 19:49:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:08:48.221 19:49:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:48.221 19:49:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:48.221 19:49:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:48.221 19:49:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:48.221 19:49:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:08:48.221 19:49:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:48.221 19:49:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:48.221 19:49:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:48.221 19:49:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:48.221 19:49:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:48.221 19:49:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.221 19:49:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:48.221 19:49:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.221 19:49:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.221 19:49:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:48.221 "name": "Existed_Raid", 00:08:48.221 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:08:48.221 "strip_size_kb": 0, 00:08:48.221 "state": "configuring", 00:08:48.221 "raid_level": "raid1", 00:08:48.221 "superblock": false, 00:08:48.221 "num_base_bdevs": 4, 00:08:48.221 "num_base_bdevs_discovered": 1, 00:08:48.221 "num_base_bdevs_operational": 4, 00:08:48.221 "base_bdevs_list": [ 00:08:48.221 { 00:08:48.221 "name": "BaseBdev1", 00:08:48.221 "uuid": "239987a5-e204-4cef-bb53-61d056fe24a4", 00:08:48.221 "is_configured": true, 00:08:48.221 "data_offset": 0, 00:08:48.221 "data_size": 65536 00:08:48.221 }, 00:08:48.221 { 00:08:48.221 "name": "BaseBdev2", 00:08:48.221 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:48.221 "is_configured": false, 00:08:48.221 "data_offset": 0, 00:08:48.221 "data_size": 0 00:08:48.221 }, 00:08:48.221 { 00:08:48.221 "name": "BaseBdev3", 00:08:48.221 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:48.221 "is_configured": false, 00:08:48.221 "data_offset": 0, 00:08:48.221 "data_size": 0 00:08:48.221 }, 00:08:48.221 { 00:08:48.221 "name": "BaseBdev4", 00:08:48.221 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:48.221 "is_configured": false, 00:08:48.221 "data_offset": 0, 00:08:48.221 "data_size": 0 00:08:48.221 } 00:08:48.221 ] 00:08:48.221 }' 00:08:48.221 19:49:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:48.221 19:49:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.477 19:49:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:48.477 19:49:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.477 19:49:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.477 [2024-11-26 19:49:39.342417] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:48.477 [2024-11-26 19:49:39.342973] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:48.477 19:49:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.477 19:49:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:08:48.477 19:49:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.478 19:49:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.478 [2024-11-26 19:49:39.350463] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:48.478 [2024-11-26 19:49:39.352162] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:48.478 [2024-11-26 19:49:39.352273] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:48.478 [2024-11-26 19:49:39.352326] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:48.478 [2024-11-26 19:49:39.352361] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:48.478 [2024-11-26 19:49:39.352459] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:08:48.478 [2024-11-26 19:49:39.352491] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:08:48.478 19:49:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.478 19:49:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:48.478 19:49:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:48.478 19:49:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:08:48.478 19:49:39 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:48.478 19:49:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:48.478 19:49:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:48.478 19:49:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:48.478 19:49:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:08:48.478 19:49:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:48.478 19:49:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:48.478 19:49:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:48.478 19:49:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:48.478 19:49:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:48.478 19:49:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:48.478 19:49:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.478 19:49:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.478 19:49:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.478 19:49:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:48.478 "name": "Existed_Raid", 00:08:48.478 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:48.478 "strip_size_kb": 0, 00:08:48.478 "state": "configuring", 00:08:48.478 "raid_level": "raid1", 00:08:48.478 "superblock": false, 00:08:48.478 "num_base_bdevs": 4, 00:08:48.478 "num_base_bdevs_discovered": 1, 00:08:48.478 
"num_base_bdevs_operational": 4, 00:08:48.478 "base_bdevs_list": [ 00:08:48.478 { 00:08:48.478 "name": "BaseBdev1", 00:08:48.478 "uuid": "239987a5-e204-4cef-bb53-61d056fe24a4", 00:08:48.478 "is_configured": true, 00:08:48.478 "data_offset": 0, 00:08:48.478 "data_size": 65536 00:08:48.478 }, 00:08:48.478 { 00:08:48.478 "name": "BaseBdev2", 00:08:48.478 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:48.478 "is_configured": false, 00:08:48.478 "data_offset": 0, 00:08:48.478 "data_size": 0 00:08:48.478 }, 00:08:48.478 { 00:08:48.478 "name": "BaseBdev3", 00:08:48.478 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:48.478 "is_configured": false, 00:08:48.478 "data_offset": 0, 00:08:48.478 "data_size": 0 00:08:48.478 }, 00:08:48.478 { 00:08:48.478 "name": "BaseBdev4", 00:08:48.478 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:48.478 "is_configured": false, 00:08:48.478 "data_offset": 0, 00:08:48.478 "data_size": 0 00:08:48.478 } 00:08:48.478 ] 00:08:48.478 }' 00:08:48.478 19:49:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:48.478 19:49:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.734 19:49:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:48.734 19:49:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.734 19:49:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.994 [2024-11-26 19:49:39.674917] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:48.994 BaseBdev2 00:08:48.994 19:49:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.994 19:49:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:48.994 19:49:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # 
local bdev_name=BaseBdev2 00:08:48.994 19:49:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:48.994 19:49:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:48.994 19:49:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:48.994 19:49:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:48.994 19:49:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:48.994 19:49:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.994 19:49:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.994 19:49:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.994 19:49:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:48.994 19:49:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.994 19:49:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.994 [ 00:08:48.994 { 00:08:48.994 "name": "BaseBdev2", 00:08:48.994 "aliases": [ 00:08:48.994 "fc49923f-7c72-40e7-9cdd-311853bae265" 00:08:48.994 ], 00:08:48.994 "product_name": "Malloc disk", 00:08:48.994 "block_size": 512, 00:08:48.994 "num_blocks": 65536, 00:08:48.994 "uuid": "fc49923f-7c72-40e7-9cdd-311853bae265", 00:08:48.994 "assigned_rate_limits": { 00:08:48.994 "rw_ios_per_sec": 0, 00:08:48.994 "rw_mbytes_per_sec": 0, 00:08:48.994 "r_mbytes_per_sec": 0, 00:08:48.994 "w_mbytes_per_sec": 0 00:08:48.994 }, 00:08:48.994 "claimed": true, 00:08:48.994 "claim_type": "exclusive_write", 00:08:48.994 "zoned": false, 00:08:48.994 "supported_io_types": { 00:08:48.994 "read": true, 00:08:48.994 "write": true, 00:08:48.994 
"unmap": true, 00:08:48.994 "flush": true, 00:08:48.994 "reset": true, 00:08:48.994 "nvme_admin": false, 00:08:48.994 "nvme_io": false, 00:08:48.994 "nvme_io_md": false, 00:08:48.994 "write_zeroes": true, 00:08:48.994 "zcopy": true, 00:08:48.994 "get_zone_info": false, 00:08:48.994 "zone_management": false, 00:08:48.994 "zone_append": false, 00:08:48.994 "compare": false, 00:08:48.994 "compare_and_write": false, 00:08:48.994 "abort": true, 00:08:48.994 "seek_hole": false, 00:08:48.994 "seek_data": false, 00:08:48.994 "copy": true, 00:08:48.994 "nvme_iov_md": false 00:08:48.994 }, 00:08:48.994 "memory_domains": [ 00:08:48.994 { 00:08:48.994 "dma_device_id": "system", 00:08:48.994 "dma_device_type": 1 00:08:48.994 }, 00:08:48.994 { 00:08:48.994 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:48.994 "dma_device_type": 2 00:08:48.994 } 00:08:48.994 ], 00:08:48.994 "driver_specific": {} 00:08:48.994 } 00:08:48.994 ] 00:08:48.994 19:49:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.994 19:49:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:48.994 19:49:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:48.994 19:49:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:48.994 19:49:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:08:48.994 19:49:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:48.994 19:49:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:48.994 19:49:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:48.994 19:49:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:48.994 19:49:39 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:08:48.994 19:49:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:48.994 19:49:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:48.994 19:49:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:48.994 19:49:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:48.994 19:49:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:48.994 19:49:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:48.994 19:49:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.994 19:49:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.994 19:49:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.994 19:49:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:48.994 "name": "Existed_Raid", 00:08:48.994 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:48.994 "strip_size_kb": 0, 00:08:48.994 "state": "configuring", 00:08:48.994 "raid_level": "raid1", 00:08:48.994 "superblock": false, 00:08:48.994 "num_base_bdevs": 4, 00:08:48.994 "num_base_bdevs_discovered": 2, 00:08:48.994 "num_base_bdevs_operational": 4, 00:08:48.994 "base_bdevs_list": [ 00:08:48.994 { 00:08:48.994 "name": "BaseBdev1", 00:08:48.994 "uuid": "239987a5-e204-4cef-bb53-61d056fe24a4", 00:08:48.994 "is_configured": true, 00:08:48.994 "data_offset": 0, 00:08:48.994 "data_size": 65536 00:08:48.994 }, 00:08:48.994 { 00:08:48.994 "name": "BaseBdev2", 00:08:48.994 "uuid": "fc49923f-7c72-40e7-9cdd-311853bae265", 00:08:48.994 "is_configured": true, 00:08:48.994 
"data_offset": 0, 00:08:48.994 "data_size": 65536 00:08:48.994 }, 00:08:48.994 { 00:08:48.994 "name": "BaseBdev3", 00:08:48.994 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:48.994 "is_configured": false, 00:08:48.994 "data_offset": 0, 00:08:48.994 "data_size": 0 00:08:48.994 }, 00:08:48.994 { 00:08:48.994 "name": "BaseBdev4", 00:08:48.994 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:48.994 "is_configured": false, 00:08:48.994 "data_offset": 0, 00:08:48.994 "data_size": 0 00:08:48.994 } 00:08:48.994 ] 00:08:48.994 }' 00:08:48.995 19:49:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:48.995 19:49:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.253 19:49:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:49.253 19:49:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.253 19:49:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.253 [2024-11-26 19:49:40.055746] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:49.253 BaseBdev3 00:08:49.253 19:49:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.253 19:49:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:08:49.253 19:49:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:49.253 19:49:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:49.253 19:49:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:49.253 19:49:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:49.253 19:49:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:08:49.253 19:49:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:49.253 19:49:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.253 19:49:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.253 19:49:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.253 19:49:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:49.253 19:49:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.253 19:49:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.253 [ 00:08:49.253 { 00:08:49.253 "name": "BaseBdev3", 00:08:49.253 "aliases": [ 00:08:49.253 "86060826-cf93-4fd4-bbf3-e232fa923bac" 00:08:49.253 ], 00:08:49.253 "product_name": "Malloc disk", 00:08:49.253 "block_size": 512, 00:08:49.253 "num_blocks": 65536, 00:08:49.253 "uuid": "86060826-cf93-4fd4-bbf3-e232fa923bac", 00:08:49.253 "assigned_rate_limits": { 00:08:49.253 "rw_ios_per_sec": 0, 00:08:49.253 "rw_mbytes_per_sec": 0, 00:08:49.253 "r_mbytes_per_sec": 0, 00:08:49.253 "w_mbytes_per_sec": 0 00:08:49.253 }, 00:08:49.253 "claimed": true, 00:08:49.253 "claim_type": "exclusive_write", 00:08:49.253 "zoned": false, 00:08:49.253 "supported_io_types": { 00:08:49.253 "read": true, 00:08:49.253 "write": true, 00:08:49.253 "unmap": true, 00:08:49.253 "flush": true, 00:08:49.253 "reset": true, 00:08:49.253 "nvme_admin": false, 00:08:49.253 "nvme_io": false, 00:08:49.253 "nvme_io_md": false, 00:08:49.253 "write_zeroes": true, 00:08:49.253 "zcopy": true, 00:08:49.253 "get_zone_info": false, 00:08:49.253 "zone_management": false, 00:08:49.253 "zone_append": false, 00:08:49.253 "compare": false, 00:08:49.253 "compare_and_write": false, 00:08:49.253 "abort": true, 
00:08:49.253 "seek_hole": false, 00:08:49.253 "seek_data": false, 00:08:49.253 "copy": true, 00:08:49.253 "nvme_iov_md": false 00:08:49.253 }, 00:08:49.253 "memory_domains": [ 00:08:49.253 { 00:08:49.253 "dma_device_id": "system", 00:08:49.253 "dma_device_type": 1 00:08:49.253 }, 00:08:49.253 { 00:08:49.253 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:49.253 "dma_device_type": 2 00:08:49.253 } 00:08:49.253 ], 00:08:49.253 "driver_specific": {} 00:08:49.253 } 00:08:49.253 ] 00:08:49.253 19:49:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.253 19:49:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:49.253 19:49:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:49.253 19:49:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:49.253 19:49:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:08:49.253 19:49:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:49.253 19:49:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:49.253 19:49:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:49.253 19:49:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:49.253 19:49:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:08:49.253 19:49:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:49.253 19:49:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:49.253 19:49:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:49.253 19:49:40 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:49.253 19:49:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:49.253 19:49:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:49.253 19:49:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.253 19:49:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.253 19:49:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.253 19:49:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:49.253 "name": "Existed_Raid", 00:08:49.253 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:49.253 "strip_size_kb": 0, 00:08:49.253 "state": "configuring", 00:08:49.253 "raid_level": "raid1", 00:08:49.253 "superblock": false, 00:08:49.253 "num_base_bdevs": 4, 00:08:49.253 "num_base_bdevs_discovered": 3, 00:08:49.253 "num_base_bdevs_operational": 4, 00:08:49.253 "base_bdevs_list": [ 00:08:49.253 { 00:08:49.253 "name": "BaseBdev1", 00:08:49.253 "uuid": "239987a5-e204-4cef-bb53-61d056fe24a4", 00:08:49.253 "is_configured": true, 00:08:49.253 "data_offset": 0, 00:08:49.253 "data_size": 65536 00:08:49.253 }, 00:08:49.253 { 00:08:49.253 "name": "BaseBdev2", 00:08:49.253 "uuid": "fc49923f-7c72-40e7-9cdd-311853bae265", 00:08:49.253 "is_configured": true, 00:08:49.253 "data_offset": 0, 00:08:49.253 "data_size": 65536 00:08:49.253 }, 00:08:49.253 { 00:08:49.253 "name": "BaseBdev3", 00:08:49.253 "uuid": "86060826-cf93-4fd4-bbf3-e232fa923bac", 00:08:49.253 "is_configured": true, 00:08:49.253 "data_offset": 0, 00:08:49.253 "data_size": 65536 00:08:49.253 }, 00:08:49.253 { 00:08:49.253 "name": "BaseBdev4", 00:08:49.253 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:49.253 "is_configured": false, 00:08:49.253 "data_offset": 
0, 00:08:49.253 "data_size": 0 00:08:49.253 } 00:08:49.253 ] 00:08:49.253 }' 00:08:49.253 19:49:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:49.253 19:49:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.513 19:49:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:08:49.513 19:49:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.513 19:49:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.513 [2024-11-26 19:49:40.420565] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:08:49.513 [2024-11-26 19:49:40.420612] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:49.513 [2024-11-26 19:49:40.420622] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:08:49.513 [2024-11-26 19:49:40.420864] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:08:49.513 [2024-11-26 19:49:40.421016] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:49.513 [2024-11-26 19:49:40.421027] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:49.513 [2024-11-26 19:49:40.421262] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:49.513 BaseBdev4 00:08:49.513 19:49:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.513 19:49:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:08:49.513 19:49:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:08:49.513 19:49:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local 
bdev_timeout= 00:08:49.513 19:49:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:49.513 19:49:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:49.513 19:49:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:49.513 19:49:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:49.513 19:49:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.513 19:49:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.513 19:49:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.513 19:49:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:08:49.513 19:49:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.513 19:49:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.513 [ 00:08:49.513 { 00:08:49.513 "name": "BaseBdev4", 00:08:49.513 "aliases": [ 00:08:49.513 "8fde1e0f-fee3-4067-99ed-6de8f5f36820" 00:08:49.513 ], 00:08:49.513 "product_name": "Malloc disk", 00:08:49.513 "block_size": 512, 00:08:49.513 "num_blocks": 65536, 00:08:49.513 "uuid": "8fde1e0f-fee3-4067-99ed-6de8f5f36820", 00:08:49.513 "assigned_rate_limits": { 00:08:49.513 "rw_ios_per_sec": 0, 00:08:49.513 "rw_mbytes_per_sec": 0, 00:08:49.513 "r_mbytes_per_sec": 0, 00:08:49.513 "w_mbytes_per_sec": 0 00:08:49.513 }, 00:08:49.513 "claimed": true, 00:08:49.513 "claim_type": "exclusive_write", 00:08:49.513 "zoned": false, 00:08:49.513 "supported_io_types": { 00:08:49.513 "read": true, 00:08:49.513 "write": true, 00:08:49.513 "unmap": true, 00:08:49.513 "flush": true, 00:08:49.513 "reset": true, 00:08:49.513 "nvme_admin": false, 00:08:49.513 "nvme_io": 
false, 00:08:49.513 "nvme_io_md": false, 00:08:49.513 "write_zeroes": true, 00:08:49.513 "zcopy": true, 00:08:49.513 "get_zone_info": false, 00:08:49.513 "zone_management": false, 00:08:49.513 "zone_append": false, 00:08:49.513 "compare": false, 00:08:49.513 "compare_and_write": false, 00:08:49.513 "abort": true, 00:08:49.513 "seek_hole": false, 00:08:49.513 "seek_data": false, 00:08:49.513 "copy": true, 00:08:49.513 "nvme_iov_md": false 00:08:49.513 }, 00:08:49.513 "memory_domains": [ 00:08:49.513 { 00:08:49.513 "dma_device_id": "system", 00:08:49.513 "dma_device_type": 1 00:08:49.513 }, 00:08:49.513 { 00:08:49.513 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:49.513 "dma_device_type": 2 00:08:49.513 } 00:08:49.513 ], 00:08:49.513 "driver_specific": {} 00:08:49.513 } 00:08:49.513 ] 00:08:49.513 19:49:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.513 19:49:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:49.513 19:49:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:49.513 19:49:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:49.513 19:49:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:08:49.513 19:49:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:49.513 19:49:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:49.513 19:49:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:49.513 19:49:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:49.513 19:49:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:08:49.513 19:49:40 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:49.513 19:49:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:49.514 19:49:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:49.514 19:49:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:49.772 19:49:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:49.772 19:49:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:49.772 19:49:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.772 19:49:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.772 19:49:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.772 19:49:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:49.772 "name": "Existed_Raid", 00:08:49.772 "uuid": "be764c19-fec1-4775-8e56-c70fae81717b", 00:08:49.772 "strip_size_kb": 0, 00:08:49.772 "state": "online", 00:08:49.772 "raid_level": "raid1", 00:08:49.772 "superblock": false, 00:08:49.772 "num_base_bdevs": 4, 00:08:49.772 "num_base_bdevs_discovered": 4, 00:08:49.772 "num_base_bdevs_operational": 4, 00:08:49.772 "base_bdevs_list": [ 00:08:49.772 { 00:08:49.772 "name": "BaseBdev1", 00:08:49.772 "uuid": "239987a5-e204-4cef-bb53-61d056fe24a4", 00:08:49.772 "is_configured": true, 00:08:49.772 "data_offset": 0, 00:08:49.772 "data_size": 65536 00:08:49.772 }, 00:08:49.772 { 00:08:49.772 "name": "BaseBdev2", 00:08:49.772 "uuid": "fc49923f-7c72-40e7-9cdd-311853bae265", 00:08:49.772 "is_configured": true, 00:08:49.772 "data_offset": 0, 00:08:49.772 "data_size": 65536 00:08:49.772 }, 00:08:49.772 { 00:08:49.772 "name": "BaseBdev3", 00:08:49.772 "uuid": "86060826-cf93-4fd4-bbf3-e232fa923bac", 
00:08:49.772 "is_configured": true, 00:08:49.772 "data_offset": 0, 00:08:49.772 "data_size": 65536 00:08:49.772 }, 00:08:49.772 { 00:08:49.772 "name": "BaseBdev4", 00:08:49.772 "uuid": "8fde1e0f-fee3-4067-99ed-6de8f5f36820", 00:08:49.772 "is_configured": true, 00:08:49.772 "data_offset": 0, 00:08:49.772 "data_size": 65536 00:08:49.772 } 00:08:49.772 ] 00:08:49.772 }' 00:08:49.772 19:49:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:49.772 19:49:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.031 19:49:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:50.031 19:49:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:50.031 19:49:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:50.031 19:49:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:50.031 19:49:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:50.031 19:49:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:50.031 19:49:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:50.031 19:49:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:50.031 19:49:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.031 19:49:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.031 [2024-11-26 19:49:40.768990] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:50.031 19:49:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.031 19:49:40 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:50.031 "name": "Existed_Raid", 00:08:50.031 "aliases": [ 00:08:50.031 "be764c19-fec1-4775-8e56-c70fae81717b" 00:08:50.031 ], 00:08:50.031 "product_name": "Raid Volume", 00:08:50.032 "block_size": 512, 00:08:50.032 "num_blocks": 65536, 00:08:50.032 "uuid": "be764c19-fec1-4775-8e56-c70fae81717b", 00:08:50.032 "assigned_rate_limits": { 00:08:50.032 "rw_ios_per_sec": 0, 00:08:50.032 "rw_mbytes_per_sec": 0, 00:08:50.032 "r_mbytes_per_sec": 0, 00:08:50.032 "w_mbytes_per_sec": 0 00:08:50.032 }, 00:08:50.032 "claimed": false, 00:08:50.032 "zoned": false, 00:08:50.032 "supported_io_types": { 00:08:50.032 "read": true, 00:08:50.032 "write": true, 00:08:50.032 "unmap": false, 00:08:50.032 "flush": false, 00:08:50.032 "reset": true, 00:08:50.032 "nvme_admin": false, 00:08:50.032 "nvme_io": false, 00:08:50.032 "nvme_io_md": false, 00:08:50.032 "write_zeroes": true, 00:08:50.032 "zcopy": false, 00:08:50.032 "get_zone_info": false, 00:08:50.032 "zone_management": false, 00:08:50.032 "zone_append": false, 00:08:50.032 "compare": false, 00:08:50.032 "compare_and_write": false, 00:08:50.032 "abort": false, 00:08:50.032 "seek_hole": false, 00:08:50.032 "seek_data": false, 00:08:50.032 "copy": false, 00:08:50.032 "nvme_iov_md": false 00:08:50.032 }, 00:08:50.032 "memory_domains": [ 00:08:50.032 { 00:08:50.032 "dma_device_id": "system", 00:08:50.032 "dma_device_type": 1 00:08:50.032 }, 00:08:50.032 { 00:08:50.032 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:50.032 "dma_device_type": 2 00:08:50.032 }, 00:08:50.032 { 00:08:50.032 "dma_device_id": "system", 00:08:50.032 "dma_device_type": 1 00:08:50.032 }, 00:08:50.032 { 00:08:50.032 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:50.032 "dma_device_type": 2 00:08:50.032 }, 00:08:50.032 { 00:08:50.032 "dma_device_id": "system", 00:08:50.032 "dma_device_type": 1 00:08:50.032 }, 00:08:50.032 { 00:08:50.032 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:50.032 "dma_device_type": 2 
00:08:50.032 }, 00:08:50.032 { 00:08:50.032 "dma_device_id": "system", 00:08:50.032 "dma_device_type": 1 00:08:50.032 }, 00:08:50.032 { 00:08:50.032 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:50.032 "dma_device_type": 2 00:08:50.032 } 00:08:50.032 ], 00:08:50.032 "driver_specific": { 00:08:50.032 "raid": { 00:08:50.032 "uuid": "be764c19-fec1-4775-8e56-c70fae81717b", 00:08:50.032 "strip_size_kb": 0, 00:08:50.032 "state": "online", 00:08:50.032 "raid_level": "raid1", 00:08:50.032 "superblock": false, 00:08:50.032 "num_base_bdevs": 4, 00:08:50.032 "num_base_bdevs_discovered": 4, 00:08:50.032 "num_base_bdevs_operational": 4, 00:08:50.032 "base_bdevs_list": [ 00:08:50.032 { 00:08:50.032 "name": "BaseBdev1", 00:08:50.032 "uuid": "239987a5-e204-4cef-bb53-61d056fe24a4", 00:08:50.032 "is_configured": true, 00:08:50.032 "data_offset": 0, 00:08:50.032 "data_size": 65536 00:08:50.032 }, 00:08:50.032 { 00:08:50.032 "name": "BaseBdev2", 00:08:50.032 "uuid": "fc49923f-7c72-40e7-9cdd-311853bae265", 00:08:50.032 "is_configured": true, 00:08:50.032 "data_offset": 0, 00:08:50.032 "data_size": 65536 00:08:50.032 }, 00:08:50.032 { 00:08:50.032 "name": "BaseBdev3", 00:08:50.032 "uuid": "86060826-cf93-4fd4-bbf3-e232fa923bac", 00:08:50.032 "is_configured": true, 00:08:50.032 "data_offset": 0, 00:08:50.032 "data_size": 65536 00:08:50.032 }, 00:08:50.032 { 00:08:50.032 "name": "BaseBdev4", 00:08:50.032 "uuid": "8fde1e0f-fee3-4067-99ed-6de8f5f36820", 00:08:50.032 "is_configured": true, 00:08:50.032 "data_offset": 0, 00:08:50.032 "data_size": 65536 00:08:50.032 } 00:08:50.032 ] 00:08:50.032 } 00:08:50.032 } 00:08:50.032 }' 00:08:50.032 19:49:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:50.032 19:49:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:50.032 BaseBdev2 00:08:50.032 BaseBdev3 00:08:50.032 BaseBdev4' 00:08:50.032 
19:49:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:50.032 19:49:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:50.032 19:49:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:50.032 19:49:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:50.032 19:49:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.032 19:49:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.032 19:49:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:50.032 19:49:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.032 19:49:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:50.032 19:49:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:50.032 19:49:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:50.032 19:49:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:50.032 19:49:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:50.032 19:49:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.032 19:49:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.032 19:49:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.032 19:49:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 
-- # cmp_base_bdev='512 ' 00:08:50.032 19:49:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:50.032 19:49:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:50.032 19:49:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:50.032 19:49:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:50.032 19:49:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.032 19:49:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.032 19:49:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.032 19:49:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:50.032 19:49:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:50.032 19:49:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:50.032 19:49:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:08:50.032 19:49:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.032 19:49:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.032 19:49:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:50.032 19:49:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.291 19:49:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:50.291 19:49:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 
512 == \5\1\2\ \ \ ]] 00:08:50.291 19:49:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:50.291 19:49:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.291 19:49:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.291 [2024-11-26 19:49:40.980756] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:50.291 19:49:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.291 19:49:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:50.291 19:49:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:08:50.291 19:49:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:50.291 19:49:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:50.291 19:49:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:08:50.291 19:49:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:08:50.291 19:49:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:50.291 19:49:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:50.291 19:49:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:50.291 19:49:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:50.291 19:49:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:50.291 19:49:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:50.291 19:49:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:08:50.291 19:49:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:50.291 19:49:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:50.291 19:49:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:50.291 19:49:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:50.291 19:49:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.291 19:49:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.291 19:49:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.291 19:49:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:50.291 "name": "Existed_Raid", 00:08:50.291 "uuid": "be764c19-fec1-4775-8e56-c70fae81717b", 00:08:50.291 "strip_size_kb": 0, 00:08:50.291 "state": "online", 00:08:50.291 "raid_level": "raid1", 00:08:50.291 "superblock": false, 00:08:50.291 "num_base_bdevs": 4, 00:08:50.291 "num_base_bdevs_discovered": 3, 00:08:50.291 "num_base_bdevs_operational": 3, 00:08:50.291 "base_bdevs_list": [ 00:08:50.291 { 00:08:50.291 "name": null, 00:08:50.291 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:50.291 "is_configured": false, 00:08:50.291 "data_offset": 0, 00:08:50.291 "data_size": 65536 00:08:50.291 }, 00:08:50.291 { 00:08:50.291 "name": "BaseBdev2", 00:08:50.291 "uuid": "fc49923f-7c72-40e7-9cdd-311853bae265", 00:08:50.291 "is_configured": true, 00:08:50.291 "data_offset": 0, 00:08:50.291 "data_size": 65536 00:08:50.291 }, 00:08:50.291 { 00:08:50.291 "name": "BaseBdev3", 00:08:50.291 "uuid": "86060826-cf93-4fd4-bbf3-e232fa923bac", 00:08:50.291 "is_configured": true, 00:08:50.291 "data_offset": 0, 00:08:50.291 "data_size": 65536 00:08:50.291 }, 00:08:50.291 { 
00:08:50.291 "name": "BaseBdev4", 00:08:50.291 "uuid": "8fde1e0f-fee3-4067-99ed-6de8f5f36820", 00:08:50.291 "is_configured": true, 00:08:50.291 "data_offset": 0, 00:08:50.291 "data_size": 65536 00:08:50.291 } 00:08:50.291 ] 00:08:50.291 }' 00:08:50.291 19:49:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:50.291 19:49:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.548 19:49:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:50.548 19:49:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:50.549 19:49:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:50.549 19:49:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.549 19:49:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.549 19:49:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:50.549 19:49:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.549 19:49:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:50.549 19:49:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:50.549 19:49:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:50.549 19:49:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.549 19:49:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.549 [2024-11-26 19:49:41.357688] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:50.549 19:49:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.549 
19:49:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:50.549 19:49:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:50.549 19:49:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:50.549 19:49:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.549 19:49:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:50.549 19:49:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.549 19:49:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.549 19:49:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:50.549 19:49:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:50.549 19:49:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:08:50.549 19:49:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.549 19:49:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.549 [2024-11-26 19:49:41.446856] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:50.808 19:49:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.808 19:49:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:50.808 19:49:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:50.808 19:49:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:50.808 19:49:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.808 19:49:41 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.808 19:49:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:50.808 19:49:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.808 19:49:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:50.808 19:49:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:50.808 19:49:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:08:50.808 19:49:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.808 19:49:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.808 [2024-11-26 19:49:41.532446] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:08:50.808 [2024-11-26 19:49:41.532531] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:50.808 [2024-11-26 19:49:41.581207] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:50.808 [2024-11-26 19:49:41.581252] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:50.808 [2024-11-26 19:49:41.581262] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:50.808 19:49:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.808 19:49:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:50.808 19:49:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:50.808 19:49:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:50.808 19:49:41 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.808 19:49:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:50.808 19:49:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.808 19:49:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.808 19:49:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:50.808 19:49:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:50.808 19:49:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:08:50.808 19:49:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:08:50.808 19:49:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:50.808 19:49:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:50.808 19:49:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.808 19:49:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.808 BaseBdev2 00:08:50.808 19:49:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.808 19:49:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:08:50.808 19:49:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:50.808 19:49:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:50.808 19:49:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:50.808 19:49:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:50.808 19:49:41 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:50.808 19:49:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:50.808 19:49:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.808 19:49:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.808 19:49:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.808 19:49:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:50.808 19:49:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.808 19:49:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.808 [ 00:08:50.808 { 00:08:50.808 "name": "BaseBdev2", 00:08:50.808 "aliases": [ 00:08:50.808 "cd8c6a8e-b66e-43b4-b156-81023c1c21eb" 00:08:50.808 ], 00:08:50.808 "product_name": "Malloc disk", 00:08:50.808 "block_size": 512, 00:08:50.808 "num_blocks": 65536, 00:08:50.808 "uuid": "cd8c6a8e-b66e-43b4-b156-81023c1c21eb", 00:08:50.808 "assigned_rate_limits": { 00:08:50.808 "rw_ios_per_sec": 0, 00:08:50.808 "rw_mbytes_per_sec": 0, 00:08:50.808 "r_mbytes_per_sec": 0, 00:08:50.808 "w_mbytes_per_sec": 0 00:08:50.809 }, 00:08:50.809 "claimed": false, 00:08:50.809 "zoned": false, 00:08:50.809 "supported_io_types": { 00:08:50.809 "read": true, 00:08:50.809 "write": true, 00:08:50.809 "unmap": true, 00:08:50.809 "flush": true, 00:08:50.809 "reset": true, 00:08:50.809 "nvme_admin": false, 00:08:50.809 "nvme_io": false, 00:08:50.809 "nvme_io_md": false, 00:08:50.809 "write_zeroes": true, 00:08:50.809 "zcopy": true, 00:08:50.809 "get_zone_info": false, 00:08:50.809 "zone_management": false, 00:08:50.809 "zone_append": false, 00:08:50.809 "compare": false, 00:08:50.809 "compare_and_write": false, 
00:08:50.809 "abort": true, 00:08:50.809 "seek_hole": false, 00:08:50.809 "seek_data": false, 00:08:50.809 "copy": true, 00:08:50.809 "nvme_iov_md": false 00:08:50.809 }, 00:08:50.809 "memory_domains": [ 00:08:50.809 { 00:08:50.809 "dma_device_id": "system", 00:08:50.809 "dma_device_type": 1 00:08:50.809 }, 00:08:50.809 { 00:08:50.809 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:50.809 "dma_device_type": 2 00:08:50.809 } 00:08:50.809 ], 00:08:50.809 "driver_specific": {} 00:08:50.809 } 00:08:50.809 ] 00:08:50.809 19:49:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.809 19:49:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:50.809 19:49:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:50.809 19:49:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:50.809 19:49:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:50.809 19:49:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.809 19:49:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.809 BaseBdev3 00:08:50.809 19:49:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.809 19:49:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:08:50.809 19:49:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:50.809 19:49:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:50.809 19:49:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:50.809 19:49:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:50.809 19:49:41 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:50.809 19:49:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:50.809 19:49:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.809 19:49:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.809 19:49:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.809 19:49:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:50.809 19:49:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.809 19:49:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.809 [ 00:08:50.809 { 00:08:50.809 "name": "BaseBdev3", 00:08:50.809 "aliases": [ 00:08:50.809 "73ab048c-0547-40c9-bf94-c2fbb16eb7fa" 00:08:50.809 ], 00:08:50.809 "product_name": "Malloc disk", 00:08:50.809 "block_size": 512, 00:08:50.809 "num_blocks": 65536, 00:08:50.809 "uuid": "73ab048c-0547-40c9-bf94-c2fbb16eb7fa", 00:08:50.809 "assigned_rate_limits": { 00:08:50.809 "rw_ios_per_sec": 0, 00:08:50.809 "rw_mbytes_per_sec": 0, 00:08:50.809 "r_mbytes_per_sec": 0, 00:08:50.809 "w_mbytes_per_sec": 0 00:08:50.809 }, 00:08:50.809 "claimed": false, 00:08:50.809 "zoned": false, 00:08:50.809 "supported_io_types": { 00:08:50.809 "read": true, 00:08:50.809 "write": true, 00:08:50.809 "unmap": true, 00:08:50.809 "flush": true, 00:08:50.809 "reset": true, 00:08:50.809 "nvme_admin": false, 00:08:50.809 "nvme_io": false, 00:08:50.809 "nvme_io_md": false, 00:08:50.809 "write_zeroes": true, 00:08:50.809 "zcopy": true, 00:08:50.809 "get_zone_info": false, 00:08:50.809 "zone_management": false, 00:08:50.809 "zone_append": false, 00:08:50.809 "compare": false, 00:08:50.809 "compare_and_write": false, 
00:08:50.809 "abort": true, 00:08:50.809 "seek_hole": false, 00:08:50.809 "seek_data": false, 00:08:50.809 "copy": true, 00:08:50.809 "nvme_iov_md": false 00:08:50.809 }, 00:08:50.809 "memory_domains": [ 00:08:50.809 { 00:08:50.809 "dma_device_id": "system", 00:08:50.809 "dma_device_type": 1 00:08:50.809 }, 00:08:50.809 { 00:08:50.809 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:50.809 "dma_device_type": 2 00:08:50.809 } 00:08:50.809 ], 00:08:50.809 "driver_specific": {} 00:08:50.809 } 00:08:50.809 ] 00:08:50.809 19:49:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.809 19:49:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:50.809 19:49:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:50.809 19:49:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:50.809 19:49:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:08:50.809 19:49:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.809 19:49:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.068 BaseBdev4 00:08:51.068 19:49:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.068 19:49:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:08:51.068 19:49:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:08:51.068 19:49:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:51.068 19:49:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:51.068 19:49:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:51.068 19:49:41 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:51.068 19:49:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:51.068 19:49:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.068 19:49:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.068 19:49:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.068 19:49:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:08:51.068 19:49:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.068 19:49:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.068 [ 00:08:51.068 { 00:08:51.068 "name": "BaseBdev4", 00:08:51.068 "aliases": [ 00:08:51.068 "218404c7-51c0-4e46-896d-eb8d5fb33565" 00:08:51.068 ], 00:08:51.068 "product_name": "Malloc disk", 00:08:51.068 "block_size": 512, 00:08:51.068 "num_blocks": 65536, 00:08:51.068 "uuid": "218404c7-51c0-4e46-896d-eb8d5fb33565", 00:08:51.068 "assigned_rate_limits": { 00:08:51.068 "rw_ios_per_sec": 0, 00:08:51.068 "rw_mbytes_per_sec": 0, 00:08:51.068 "r_mbytes_per_sec": 0, 00:08:51.068 "w_mbytes_per_sec": 0 00:08:51.068 }, 00:08:51.068 "claimed": false, 00:08:51.068 "zoned": false, 00:08:51.068 "supported_io_types": { 00:08:51.068 "read": true, 00:08:51.068 "write": true, 00:08:51.068 "unmap": true, 00:08:51.068 "flush": true, 00:08:51.068 "reset": true, 00:08:51.068 "nvme_admin": false, 00:08:51.068 "nvme_io": false, 00:08:51.068 "nvme_io_md": false, 00:08:51.068 "write_zeroes": true, 00:08:51.068 "zcopy": true, 00:08:51.068 "get_zone_info": false, 00:08:51.068 "zone_management": false, 00:08:51.068 "zone_append": false, 00:08:51.068 "compare": false, 00:08:51.068 "compare_and_write": false, 
00:08:51.068 "abort": true, 00:08:51.068 "seek_hole": false, 00:08:51.068 "seek_data": false, 00:08:51.068 "copy": true, 00:08:51.068 "nvme_iov_md": false 00:08:51.068 }, 00:08:51.068 "memory_domains": [ 00:08:51.068 { 00:08:51.068 "dma_device_id": "system", 00:08:51.068 "dma_device_type": 1 00:08:51.068 }, 00:08:51.068 { 00:08:51.068 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:51.068 "dma_device_type": 2 00:08:51.068 } 00:08:51.068 ], 00:08:51.068 "driver_specific": {} 00:08:51.068 } 00:08:51.068 ] 00:08:51.068 19:49:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.068 19:49:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:51.068 19:49:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:51.068 19:49:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:51.068 19:49:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:08:51.068 19:49:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.068 19:49:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.068 [2024-11-26 19:49:41.786339] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:51.068 [2024-11-26 19:49:41.786489] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:51.068 [2024-11-26 19:49:41.786549] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:51.068 [2024-11-26 19:49:41.788219] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:51.068 [2024-11-26 19:49:41.788334] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:08:51.068 19:49:41 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.068 19:49:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:08:51.068 19:49:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:51.068 19:49:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:51.068 19:49:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:51.068 19:49:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:51.068 19:49:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:08:51.068 19:49:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:51.068 19:49:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:51.068 19:49:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:51.068 19:49:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:51.068 19:49:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:51.068 19:49:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.068 19:49:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.068 19:49:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:51.069 19:49:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.069 19:49:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:51.069 "name": "Existed_Raid", 00:08:51.069 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:08:51.069 "strip_size_kb": 0, 00:08:51.069 "state": "configuring", 00:08:51.069 "raid_level": "raid1", 00:08:51.069 "superblock": false, 00:08:51.069 "num_base_bdevs": 4, 00:08:51.069 "num_base_bdevs_discovered": 3, 00:08:51.069 "num_base_bdevs_operational": 4, 00:08:51.069 "base_bdevs_list": [ 00:08:51.069 { 00:08:51.069 "name": "BaseBdev1", 00:08:51.069 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:51.069 "is_configured": false, 00:08:51.069 "data_offset": 0, 00:08:51.069 "data_size": 0 00:08:51.069 }, 00:08:51.069 { 00:08:51.069 "name": "BaseBdev2", 00:08:51.069 "uuid": "cd8c6a8e-b66e-43b4-b156-81023c1c21eb", 00:08:51.069 "is_configured": true, 00:08:51.069 "data_offset": 0, 00:08:51.069 "data_size": 65536 00:08:51.069 }, 00:08:51.069 { 00:08:51.069 "name": "BaseBdev3", 00:08:51.069 "uuid": "73ab048c-0547-40c9-bf94-c2fbb16eb7fa", 00:08:51.069 "is_configured": true, 00:08:51.069 "data_offset": 0, 00:08:51.069 "data_size": 65536 00:08:51.069 }, 00:08:51.069 { 00:08:51.069 "name": "BaseBdev4", 00:08:51.069 "uuid": "218404c7-51c0-4e46-896d-eb8d5fb33565", 00:08:51.069 "is_configured": true, 00:08:51.069 "data_offset": 0, 00:08:51.069 "data_size": 65536 00:08:51.069 } 00:08:51.069 ] 00:08:51.069 }' 00:08:51.069 19:49:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:51.069 19:49:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.327 19:49:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:08:51.327 19:49:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.327 19:49:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.327 [2024-11-26 19:49:42.114445] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:51.327 19:49:42 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.327 19:49:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:08:51.327 19:49:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:51.327 19:49:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:51.327 19:49:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:51.327 19:49:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:51.327 19:49:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:08:51.327 19:49:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:51.327 19:49:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:51.327 19:49:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:51.327 19:49:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:51.327 19:49:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:51.327 19:49:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.327 19:49:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.327 19:49:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:51.327 19:49:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.327 19:49:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:51.327 "name": "Existed_Raid", 00:08:51.327 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:51.327 
"strip_size_kb": 0, 00:08:51.327 "state": "configuring", 00:08:51.327 "raid_level": "raid1", 00:08:51.327 "superblock": false, 00:08:51.327 "num_base_bdevs": 4, 00:08:51.327 "num_base_bdevs_discovered": 2, 00:08:51.327 "num_base_bdevs_operational": 4, 00:08:51.327 "base_bdevs_list": [ 00:08:51.327 { 00:08:51.327 "name": "BaseBdev1", 00:08:51.327 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:51.327 "is_configured": false, 00:08:51.327 "data_offset": 0, 00:08:51.327 "data_size": 0 00:08:51.327 }, 00:08:51.327 { 00:08:51.327 "name": null, 00:08:51.327 "uuid": "cd8c6a8e-b66e-43b4-b156-81023c1c21eb", 00:08:51.327 "is_configured": false, 00:08:51.327 "data_offset": 0, 00:08:51.327 "data_size": 65536 00:08:51.327 }, 00:08:51.327 { 00:08:51.327 "name": "BaseBdev3", 00:08:51.327 "uuid": "73ab048c-0547-40c9-bf94-c2fbb16eb7fa", 00:08:51.327 "is_configured": true, 00:08:51.327 "data_offset": 0, 00:08:51.327 "data_size": 65536 00:08:51.327 }, 00:08:51.327 { 00:08:51.327 "name": "BaseBdev4", 00:08:51.327 "uuid": "218404c7-51c0-4e46-896d-eb8d5fb33565", 00:08:51.327 "is_configured": true, 00:08:51.327 "data_offset": 0, 00:08:51.327 "data_size": 65536 00:08:51.327 } 00:08:51.327 ] 00:08:51.327 }' 00:08:51.327 19:49:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:51.327 19:49:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.585 19:49:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:51.585 19:49:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.585 19:49:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.585 19:49:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:51.585 19:49:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.585 19:49:42 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:08:51.585 19:49:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:51.585 19:49:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.585 19:49:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.585 [2024-11-26 19:49:42.499148] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:51.585 BaseBdev1 00:08:51.585 19:49:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.585 19:49:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:08:51.585 19:49:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:51.585 19:49:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:51.585 19:49:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:51.585 19:49:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:51.585 19:49:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:51.585 19:49:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:51.585 19:49:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.585 19:49:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.585 19:49:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.585 19:49:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:51.585 19:49:42 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.585 19:49:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.843 [ 00:08:51.843 { 00:08:51.843 "name": "BaseBdev1", 00:08:51.843 "aliases": [ 00:08:51.843 "2d7a6600-5ede-4fec-8d18-fe759bceef6c" 00:08:51.843 ], 00:08:51.843 "product_name": "Malloc disk", 00:08:51.843 "block_size": 512, 00:08:51.843 "num_blocks": 65536, 00:08:51.843 "uuid": "2d7a6600-5ede-4fec-8d18-fe759bceef6c", 00:08:51.843 "assigned_rate_limits": { 00:08:51.843 "rw_ios_per_sec": 0, 00:08:51.843 "rw_mbytes_per_sec": 0, 00:08:51.843 "r_mbytes_per_sec": 0, 00:08:51.843 "w_mbytes_per_sec": 0 00:08:51.843 }, 00:08:51.843 "claimed": true, 00:08:51.843 "claim_type": "exclusive_write", 00:08:51.843 "zoned": false, 00:08:51.843 "supported_io_types": { 00:08:51.843 "read": true, 00:08:51.843 "write": true, 00:08:51.843 "unmap": true, 00:08:51.843 "flush": true, 00:08:51.843 "reset": true, 00:08:51.843 "nvme_admin": false, 00:08:51.843 "nvme_io": false, 00:08:51.843 "nvme_io_md": false, 00:08:51.843 "write_zeroes": true, 00:08:51.843 "zcopy": true, 00:08:51.843 "get_zone_info": false, 00:08:51.843 "zone_management": false, 00:08:51.843 "zone_append": false, 00:08:51.843 "compare": false, 00:08:51.843 "compare_and_write": false, 00:08:51.843 "abort": true, 00:08:51.843 "seek_hole": false, 00:08:51.843 "seek_data": false, 00:08:51.843 "copy": true, 00:08:51.843 "nvme_iov_md": false 00:08:51.843 }, 00:08:51.843 "memory_domains": [ 00:08:51.843 { 00:08:51.843 "dma_device_id": "system", 00:08:51.843 "dma_device_type": 1 00:08:51.843 }, 00:08:51.843 { 00:08:51.843 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:51.843 "dma_device_type": 2 00:08:51.843 } 00:08:51.843 ], 00:08:51.843 "driver_specific": {} 00:08:51.843 } 00:08:51.843 ] 00:08:51.843 19:49:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.843 19:49:42 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@911 -- # return 0 00:08:51.843 19:49:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:08:51.843 19:49:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:51.843 19:49:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:51.843 19:49:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:51.843 19:49:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:51.843 19:49:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:08:51.843 19:49:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:51.843 19:49:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:51.843 19:49:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:51.843 19:49:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:51.843 19:49:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:51.844 19:49:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:51.844 19:49:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.844 19:49:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.844 19:49:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.844 19:49:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:51.844 "name": "Existed_Raid", 00:08:51.844 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:51.844 
"strip_size_kb": 0, 00:08:51.844 "state": "configuring", 00:08:51.844 "raid_level": "raid1", 00:08:51.844 "superblock": false, 00:08:51.844 "num_base_bdevs": 4, 00:08:51.844 "num_base_bdevs_discovered": 3, 00:08:51.844 "num_base_bdevs_operational": 4, 00:08:51.844 "base_bdevs_list": [ 00:08:51.844 { 00:08:51.844 "name": "BaseBdev1", 00:08:51.844 "uuid": "2d7a6600-5ede-4fec-8d18-fe759bceef6c", 00:08:51.844 "is_configured": true, 00:08:51.844 "data_offset": 0, 00:08:51.844 "data_size": 65536 00:08:51.844 }, 00:08:51.844 { 00:08:51.844 "name": null, 00:08:51.844 "uuid": "cd8c6a8e-b66e-43b4-b156-81023c1c21eb", 00:08:51.844 "is_configured": false, 00:08:51.844 "data_offset": 0, 00:08:51.844 "data_size": 65536 00:08:51.844 }, 00:08:51.844 { 00:08:51.844 "name": "BaseBdev3", 00:08:51.844 "uuid": "73ab048c-0547-40c9-bf94-c2fbb16eb7fa", 00:08:51.844 "is_configured": true, 00:08:51.844 "data_offset": 0, 00:08:51.844 "data_size": 65536 00:08:51.844 }, 00:08:51.844 { 00:08:51.844 "name": "BaseBdev4", 00:08:51.844 "uuid": "218404c7-51c0-4e46-896d-eb8d5fb33565", 00:08:51.844 "is_configured": true, 00:08:51.844 "data_offset": 0, 00:08:51.844 "data_size": 65536 00:08:51.844 } 00:08:51.844 ] 00:08:51.844 }' 00:08:51.844 19:49:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:51.844 19:49:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.102 19:49:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:52.102 19:49:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:52.102 19:49:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.102 19:49:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.102 19:49:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.102 
19:49:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:08:52.102 19:49:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:08:52.102 19:49:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.102 19:49:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.102 [2024-11-26 19:49:42.859287] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:52.102 19:49:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.102 19:49:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:08:52.102 19:49:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:52.102 19:49:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:52.102 19:49:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:52.102 19:49:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:52.102 19:49:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:08:52.102 19:49:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:52.102 19:49:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:52.102 19:49:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:52.102 19:49:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:52.102 19:49:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:52.102 19:49:42 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:52.102 19:49:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.102 19:49:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.102 19:49:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.102 19:49:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:52.102 "name": "Existed_Raid", 00:08:52.102 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:52.102 "strip_size_kb": 0, 00:08:52.102 "state": "configuring", 00:08:52.102 "raid_level": "raid1", 00:08:52.102 "superblock": false, 00:08:52.102 "num_base_bdevs": 4, 00:08:52.102 "num_base_bdevs_discovered": 2, 00:08:52.102 "num_base_bdevs_operational": 4, 00:08:52.102 "base_bdevs_list": [ 00:08:52.102 { 00:08:52.102 "name": "BaseBdev1", 00:08:52.102 "uuid": "2d7a6600-5ede-4fec-8d18-fe759bceef6c", 00:08:52.102 "is_configured": true, 00:08:52.102 "data_offset": 0, 00:08:52.102 "data_size": 65536 00:08:52.102 }, 00:08:52.102 { 00:08:52.102 "name": null, 00:08:52.102 "uuid": "cd8c6a8e-b66e-43b4-b156-81023c1c21eb", 00:08:52.102 "is_configured": false, 00:08:52.102 "data_offset": 0, 00:08:52.102 "data_size": 65536 00:08:52.102 }, 00:08:52.102 { 00:08:52.102 "name": null, 00:08:52.102 "uuid": "73ab048c-0547-40c9-bf94-c2fbb16eb7fa", 00:08:52.102 "is_configured": false, 00:08:52.102 "data_offset": 0, 00:08:52.102 "data_size": 65536 00:08:52.102 }, 00:08:52.102 { 00:08:52.102 "name": "BaseBdev4", 00:08:52.102 "uuid": "218404c7-51c0-4e46-896d-eb8d5fb33565", 00:08:52.102 "is_configured": true, 00:08:52.102 "data_offset": 0, 00:08:52.102 "data_size": 65536 00:08:52.102 } 00:08:52.102 ] 00:08:52.102 }' 00:08:52.102 19:49:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:52.102 19:49:42 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:52.361 19:49:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:52.361 19:49:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:52.361 19:49:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.361 19:49:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.361 19:49:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.361 19:49:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:08:52.361 19:49:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:08:52.361 19:49:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.361 19:49:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.361 [2024-11-26 19:49:43.219369] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:52.361 19:49:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.361 19:49:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:08:52.361 19:49:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:52.361 19:49:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:52.361 19:49:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:52.361 19:49:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:52.361 19:49:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:08:52.361 19:49:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:52.361 19:49:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:52.361 19:49:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:52.361 19:49:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:52.361 19:49:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:52.361 19:49:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:52.361 19:49:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.361 19:49:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.361 19:49:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.361 19:49:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:52.361 "name": "Existed_Raid", 00:08:52.361 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:52.361 "strip_size_kb": 0, 00:08:52.361 "state": "configuring", 00:08:52.361 "raid_level": "raid1", 00:08:52.361 "superblock": false, 00:08:52.361 "num_base_bdevs": 4, 00:08:52.361 "num_base_bdevs_discovered": 3, 00:08:52.361 "num_base_bdevs_operational": 4, 00:08:52.361 "base_bdevs_list": [ 00:08:52.361 { 00:08:52.361 "name": "BaseBdev1", 00:08:52.361 "uuid": "2d7a6600-5ede-4fec-8d18-fe759bceef6c", 00:08:52.361 "is_configured": true, 00:08:52.361 "data_offset": 0, 00:08:52.361 "data_size": 65536 00:08:52.361 }, 00:08:52.361 { 00:08:52.361 "name": null, 00:08:52.361 "uuid": "cd8c6a8e-b66e-43b4-b156-81023c1c21eb", 00:08:52.361 "is_configured": false, 00:08:52.361 "data_offset": 0, 00:08:52.361 "data_size": 65536 00:08:52.361 }, 00:08:52.361 { 
00:08:52.361 "name": "BaseBdev3", 00:08:52.361 "uuid": "73ab048c-0547-40c9-bf94-c2fbb16eb7fa", 00:08:52.361 "is_configured": true, 00:08:52.361 "data_offset": 0, 00:08:52.361 "data_size": 65536 00:08:52.361 }, 00:08:52.361 { 00:08:52.361 "name": "BaseBdev4", 00:08:52.361 "uuid": "218404c7-51c0-4e46-896d-eb8d5fb33565", 00:08:52.361 "is_configured": true, 00:08:52.361 "data_offset": 0, 00:08:52.361 "data_size": 65536 00:08:52.361 } 00:08:52.361 ] 00:08:52.361 }' 00:08:52.361 19:49:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:52.361 19:49:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.619 19:49:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:52.619 19:49:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.619 19:49:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:52.619 19:49:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.619 19:49:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.619 19:49:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:08:52.619 19:49:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:52.619 19:49:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.619 19:49:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.877 [2024-11-26 19:49:43.555479] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:52.877 19:49:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.877 19:49:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # 
verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:08:52.877 19:49:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:52.877 19:49:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:52.877 19:49:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:52.877 19:49:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:52.877 19:49:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:08:52.878 19:49:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:52.878 19:49:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:52.878 19:49:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:52.878 19:49:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:52.878 19:49:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:52.878 19:49:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:52.878 19:49:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.878 19:49:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.878 19:49:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.878 19:49:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:52.878 "name": "Existed_Raid", 00:08:52.878 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:52.878 "strip_size_kb": 0, 00:08:52.878 "state": "configuring", 00:08:52.878 "raid_level": "raid1", 00:08:52.878 "superblock": false, 00:08:52.878 
"num_base_bdevs": 4, 00:08:52.878 "num_base_bdevs_discovered": 2, 00:08:52.878 "num_base_bdevs_operational": 4, 00:08:52.878 "base_bdevs_list": [ 00:08:52.878 { 00:08:52.878 "name": null, 00:08:52.878 "uuid": "2d7a6600-5ede-4fec-8d18-fe759bceef6c", 00:08:52.878 "is_configured": false, 00:08:52.878 "data_offset": 0, 00:08:52.878 "data_size": 65536 00:08:52.878 }, 00:08:52.878 { 00:08:52.878 "name": null, 00:08:52.878 "uuid": "cd8c6a8e-b66e-43b4-b156-81023c1c21eb", 00:08:52.878 "is_configured": false, 00:08:52.878 "data_offset": 0, 00:08:52.878 "data_size": 65536 00:08:52.878 }, 00:08:52.878 { 00:08:52.878 "name": "BaseBdev3", 00:08:52.878 "uuid": "73ab048c-0547-40c9-bf94-c2fbb16eb7fa", 00:08:52.878 "is_configured": true, 00:08:52.878 "data_offset": 0, 00:08:52.878 "data_size": 65536 00:08:52.878 }, 00:08:52.878 { 00:08:52.878 "name": "BaseBdev4", 00:08:52.878 "uuid": "218404c7-51c0-4e46-896d-eb8d5fb33565", 00:08:52.878 "is_configured": true, 00:08:52.878 "data_offset": 0, 00:08:52.878 "data_size": 65536 00:08:52.878 } 00:08:52.878 ] 00:08:52.878 }' 00:08:52.878 19:49:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:52.878 19:49:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.136 19:49:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:53.136 19:49:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:53.136 19:49:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.136 19:49:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.136 19:49:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.136 19:49:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:08:53.136 19:49:43 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:08:53.136 19:49:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.136 19:49:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.136 [2024-11-26 19:49:43.925421] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:53.136 19:49:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.136 19:49:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:08:53.136 19:49:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:53.136 19:49:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:53.136 19:49:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:53.136 19:49:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:53.136 19:49:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:08:53.136 19:49:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:53.136 19:49:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:53.136 19:49:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:53.136 19:49:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:53.136 19:49:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:53.136 19:49:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:53.136 19:49:43 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.136 19:49:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.136 19:49:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.136 19:49:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:53.136 "name": "Existed_Raid", 00:08:53.136 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:53.136 "strip_size_kb": 0, 00:08:53.136 "state": "configuring", 00:08:53.136 "raid_level": "raid1", 00:08:53.136 "superblock": false, 00:08:53.136 "num_base_bdevs": 4, 00:08:53.136 "num_base_bdevs_discovered": 3, 00:08:53.136 "num_base_bdevs_operational": 4, 00:08:53.136 "base_bdevs_list": [ 00:08:53.136 { 00:08:53.136 "name": null, 00:08:53.136 "uuid": "2d7a6600-5ede-4fec-8d18-fe759bceef6c", 00:08:53.136 "is_configured": false, 00:08:53.136 "data_offset": 0, 00:08:53.136 "data_size": 65536 00:08:53.136 }, 00:08:53.136 { 00:08:53.136 "name": "BaseBdev2", 00:08:53.136 "uuid": "cd8c6a8e-b66e-43b4-b156-81023c1c21eb", 00:08:53.136 "is_configured": true, 00:08:53.136 "data_offset": 0, 00:08:53.136 "data_size": 65536 00:08:53.136 }, 00:08:53.136 { 00:08:53.136 "name": "BaseBdev3", 00:08:53.136 "uuid": "73ab048c-0547-40c9-bf94-c2fbb16eb7fa", 00:08:53.136 "is_configured": true, 00:08:53.136 "data_offset": 0, 00:08:53.136 "data_size": 65536 00:08:53.136 }, 00:08:53.136 { 00:08:53.136 "name": "BaseBdev4", 00:08:53.136 "uuid": "218404c7-51c0-4e46-896d-eb8d5fb33565", 00:08:53.136 "is_configured": true, 00:08:53.136 "data_offset": 0, 00:08:53.136 "data_size": 65536 00:08:53.136 } 00:08:53.136 ] 00:08:53.136 }' 00:08:53.136 19:49:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:53.136 19:49:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.393 19:49:44 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:53.393 19:49:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:53.393 19:49:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.393 19:49:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.394 19:49:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.394 19:49:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:08:53.394 19:49:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:53.394 19:49:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:08:53.394 19:49:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.394 19:49:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.394 19:49:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.394 19:49:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 2d7a6600-5ede-4fec-8d18-fe759bceef6c 00:08:53.394 19:49:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.394 19:49:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.394 [2024-11-26 19:49:44.309660] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:08:53.394 [2024-11-26 19:49:44.309697] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:08:53.394 [2024-11-26 19:49:44.309704] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:08:53.394 [2024-11-26 19:49:44.309922] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:08:53.394 [2024-11-26 19:49:44.310046] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:08:53.394 [2024-11-26 19:49:44.310053] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:08:53.394 [2024-11-26 19:49:44.310251] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:53.394 NewBaseBdev 00:08:53.394 19:49:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.394 19:49:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:08:53.394 19:49:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:08:53.394 19:49:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:53.394 19:49:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:53.394 19:49:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:53.394 19:49:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:53.394 19:49:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:53.394 19:49:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.394 19:49:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.394 19:49:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.394 19:49:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:08:53.394 19:49:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.394 19:49:44 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.652 [ 00:08:53.652 { 00:08:53.652 "name": "NewBaseBdev", 00:08:53.652 "aliases": [ 00:08:53.652 "2d7a6600-5ede-4fec-8d18-fe759bceef6c" 00:08:53.652 ], 00:08:53.652 "product_name": "Malloc disk", 00:08:53.652 "block_size": 512, 00:08:53.652 "num_blocks": 65536, 00:08:53.652 "uuid": "2d7a6600-5ede-4fec-8d18-fe759bceef6c", 00:08:53.652 "assigned_rate_limits": { 00:08:53.652 "rw_ios_per_sec": 0, 00:08:53.652 "rw_mbytes_per_sec": 0, 00:08:53.652 "r_mbytes_per_sec": 0, 00:08:53.652 "w_mbytes_per_sec": 0 00:08:53.652 }, 00:08:53.652 "claimed": true, 00:08:53.652 "claim_type": "exclusive_write", 00:08:53.652 "zoned": false, 00:08:53.652 "supported_io_types": { 00:08:53.652 "read": true, 00:08:53.652 "write": true, 00:08:53.652 "unmap": true, 00:08:53.652 "flush": true, 00:08:53.652 "reset": true, 00:08:53.652 "nvme_admin": false, 00:08:53.652 "nvme_io": false, 00:08:53.652 "nvme_io_md": false, 00:08:53.652 "write_zeroes": true, 00:08:53.652 "zcopy": true, 00:08:53.652 "get_zone_info": false, 00:08:53.652 "zone_management": false, 00:08:53.652 "zone_append": false, 00:08:53.652 "compare": false, 00:08:53.652 "compare_and_write": false, 00:08:53.652 "abort": true, 00:08:53.652 "seek_hole": false, 00:08:53.652 "seek_data": false, 00:08:53.652 "copy": true, 00:08:53.652 "nvme_iov_md": false 00:08:53.652 }, 00:08:53.652 "memory_domains": [ 00:08:53.652 { 00:08:53.652 "dma_device_id": "system", 00:08:53.652 "dma_device_type": 1 00:08:53.652 }, 00:08:53.652 { 00:08:53.652 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:53.652 "dma_device_type": 2 00:08:53.652 } 00:08:53.652 ], 00:08:53.652 "driver_specific": {} 00:08:53.652 } 00:08:53.652 ] 00:08:53.652 19:49:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.652 19:49:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:53.652 19:49:44 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:08:53.652 19:49:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:53.652 19:49:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:53.652 19:49:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:53.652 19:49:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:53.652 19:49:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:08:53.652 19:49:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:53.652 19:49:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:53.652 19:49:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:53.652 19:49:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:53.652 19:49:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:53.652 19:49:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:53.652 19:49:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.652 19:49:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.652 19:49:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.652 19:49:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:53.652 "name": "Existed_Raid", 00:08:53.652 "uuid": "9ec585f2-a5c2-4b44-8807-c0d414a0c378", 00:08:53.652 "strip_size_kb": 0, 00:08:53.652 "state": "online", 00:08:53.652 "raid_level": "raid1", 
00:08:53.652 "superblock": false, 00:08:53.652 "num_base_bdevs": 4, 00:08:53.652 "num_base_bdevs_discovered": 4, 00:08:53.652 "num_base_bdevs_operational": 4, 00:08:53.652 "base_bdevs_list": [ 00:08:53.652 { 00:08:53.652 "name": "NewBaseBdev", 00:08:53.652 "uuid": "2d7a6600-5ede-4fec-8d18-fe759bceef6c", 00:08:53.652 "is_configured": true, 00:08:53.652 "data_offset": 0, 00:08:53.652 "data_size": 65536 00:08:53.652 }, 00:08:53.652 { 00:08:53.652 "name": "BaseBdev2", 00:08:53.652 "uuid": "cd8c6a8e-b66e-43b4-b156-81023c1c21eb", 00:08:53.652 "is_configured": true, 00:08:53.652 "data_offset": 0, 00:08:53.652 "data_size": 65536 00:08:53.652 }, 00:08:53.652 { 00:08:53.652 "name": "BaseBdev3", 00:08:53.652 "uuid": "73ab048c-0547-40c9-bf94-c2fbb16eb7fa", 00:08:53.652 "is_configured": true, 00:08:53.652 "data_offset": 0, 00:08:53.652 "data_size": 65536 00:08:53.652 }, 00:08:53.652 { 00:08:53.652 "name": "BaseBdev4", 00:08:53.652 "uuid": "218404c7-51c0-4e46-896d-eb8d5fb33565", 00:08:53.652 "is_configured": true, 00:08:53.652 "data_offset": 0, 00:08:53.652 "data_size": 65536 00:08:53.652 } 00:08:53.652 ] 00:08:53.652 }' 00:08:53.652 19:49:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:53.652 19:49:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.910 19:49:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:08:53.910 19:49:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:53.910 19:49:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:53.910 19:49:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:53.910 19:49:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:53.910 19:49:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev 
cmp_base_bdev 00:08:53.910 19:49:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:53.910 19:49:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:53.910 19:49:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.910 19:49:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.910 [2024-11-26 19:49:44.654080] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:53.910 19:49:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.910 19:49:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:53.910 "name": "Existed_Raid", 00:08:53.910 "aliases": [ 00:08:53.910 "9ec585f2-a5c2-4b44-8807-c0d414a0c378" 00:08:53.910 ], 00:08:53.910 "product_name": "Raid Volume", 00:08:53.910 "block_size": 512, 00:08:53.910 "num_blocks": 65536, 00:08:53.910 "uuid": "9ec585f2-a5c2-4b44-8807-c0d414a0c378", 00:08:53.910 "assigned_rate_limits": { 00:08:53.910 "rw_ios_per_sec": 0, 00:08:53.910 "rw_mbytes_per_sec": 0, 00:08:53.910 "r_mbytes_per_sec": 0, 00:08:53.910 "w_mbytes_per_sec": 0 00:08:53.910 }, 00:08:53.910 "claimed": false, 00:08:53.910 "zoned": false, 00:08:53.910 "supported_io_types": { 00:08:53.910 "read": true, 00:08:53.910 "write": true, 00:08:53.910 "unmap": false, 00:08:53.910 "flush": false, 00:08:53.910 "reset": true, 00:08:53.910 "nvme_admin": false, 00:08:53.910 "nvme_io": false, 00:08:53.910 "nvme_io_md": false, 00:08:53.910 "write_zeroes": true, 00:08:53.910 "zcopy": false, 00:08:53.910 "get_zone_info": false, 00:08:53.910 "zone_management": false, 00:08:53.910 "zone_append": false, 00:08:53.910 "compare": false, 00:08:53.910 "compare_and_write": false, 00:08:53.910 "abort": false, 00:08:53.910 "seek_hole": false, 00:08:53.910 "seek_data": false, 00:08:53.910 "copy": false, 00:08:53.910 
"nvme_iov_md": false 00:08:53.910 }, 00:08:53.910 "memory_domains": [ 00:08:53.910 { 00:08:53.910 "dma_device_id": "system", 00:08:53.910 "dma_device_type": 1 00:08:53.910 }, 00:08:53.910 { 00:08:53.910 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:53.910 "dma_device_type": 2 00:08:53.910 }, 00:08:53.910 { 00:08:53.910 "dma_device_id": "system", 00:08:53.910 "dma_device_type": 1 00:08:53.910 }, 00:08:53.910 { 00:08:53.910 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:53.910 "dma_device_type": 2 00:08:53.910 }, 00:08:53.910 { 00:08:53.910 "dma_device_id": "system", 00:08:53.910 "dma_device_type": 1 00:08:53.910 }, 00:08:53.910 { 00:08:53.910 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:53.910 "dma_device_type": 2 00:08:53.910 }, 00:08:53.910 { 00:08:53.910 "dma_device_id": "system", 00:08:53.910 "dma_device_type": 1 00:08:53.910 }, 00:08:53.910 { 00:08:53.910 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:53.910 "dma_device_type": 2 00:08:53.910 } 00:08:53.910 ], 00:08:53.910 "driver_specific": { 00:08:53.910 "raid": { 00:08:53.910 "uuid": "9ec585f2-a5c2-4b44-8807-c0d414a0c378", 00:08:53.910 "strip_size_kb": 0, 00:08:53.910 "state": "online", 00:08:53.910 "raid_level": "raid1", 00:08:53.910 "superblock": false, 00:08:53.910 "num_base_bdevs": 4, 00:08:53.910 "num_base_bdevs_discovered": 4, 00:08:53.910 "num_base_bdevs_operational": 4, 00:08:53.910 "base_bdevs_list": [ 00:08:53.910 { 00:08:53.910 "name": "NewBaseBdev", 00:08:53.910 "uuid": "2d7a6600-5ede-4fec-8d18-fe759bceef6c", 00:08:53.910 "is_configured": true, 00:08:53.910 "data_offset": 0, 00:08:53.910 "data_size": 65536 00:08:53.910 }, 00:08:53.910 { 00:08:53.910 "name": "BaseBdev2", 00:08:53.910 "uuid": "cd8c6a8e-b66e-43b4-b156-81023c1c21eb", 00:08:53.911 "is_configured": true, 00:08:53.911 "data_offset": 0, 00:08:53.911 "data_size": 65536 00:08:53.911 }, 00:08:53.911 { 00:08:53.911 "name": "BaseBdev3", 00:08:53.911 "uuid": "73ab048c-0547-40c9-bf94-c2fbb16eb7fa", 00:08:53.911 "is_configured": true, 
00:08:53.911 "data_offset": 0, 00:08:53.911 "data_size": 65536 00:08:53.911 }, 00:08:53.911 { 00:08:53.911 "name": "BaseBdev4", 00:08:53.911 "uuid": "218404c7-51c0-4e46-896d-eb8d5fb33565", 00:08:53.911 "is_configured": true, 00:08:53.911 "data_offset": 0, 00:08:53.911 "data_size": 65536 00:08:53.911 } 00:08:53.911 ] 00:08:53.911 } 00:08:53.911 } 00:08:53.911 }' 00:08:53.911 19:49:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:53.911 19:49:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:08:53.911 BaseBdev2 00:08:53.911 BaseBdev3 00:08:53.911 BaseBdev4' 00:08:53.911 19:49:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:53.911 19:49:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:53.911 19:49:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:53.911 19:49:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:08:53.911 19:49:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.911 19:49:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.911 19:49:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:53.911 19:49:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.911 19:49:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:53.911 19:49:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:53.911 19:49:44 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:53.911 19:49:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:53.911 19:49:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.911 19:49:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.911 19:49:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:53.911 19:49:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.911 19:49:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:53.911 19:49:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:53.911 19:49:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:54.170 19:49:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:54.170 19:49:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:54.170 19:49:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.170 19:49:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.170 19:49:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.170 19:49:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:54.170 19:49:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:54.170 19:49:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:54.170 19:49:44 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:08:54.170 19:49:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.170 19:49:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:54.170 19:49:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.170 19:49:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.170 19:49:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:54.170 19:49:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:54.170 19:49:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:54.170 19:49:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.170 19:49:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.170 [2024-11-26 19:49:44.917775] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:54.170 [2024-11-26 19:49:44.917800] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:54.170 [2024-11-26 19:49:44.917872] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:54.170 [2024-11-26 19:49:44.918131] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:54.170 [2024-11-26 19:49:44.918142] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:08:54.170 19:49:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.170 19:49:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 71267 
00:08:54.170 19:49:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 71267 ']' 00:08:54.170 19:49:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 71267 00:08:54.170 19:49:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:08:54.170 19:49:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:54.170 19:49:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71267 00:08:54.170 killing process with pid 71267 00:08:54.170 19:49:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:54.170 19:49:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:54.170 19:49:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71267' 00:08:54.170 19:49:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 71267 00:08:54.170 [2024-11-26 19:49:44.948799] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:54.170 19:49:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 71267 00:08:54.428 [2024-11-26 19:49:45.150633] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:54.994 19:49:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:08:54.994 00:08:54.994 real 0m8.015s 00:08:54.994 user 0m12.802s 00:08:54.994 sys 0m1.423s 00:08:54.994 19:49:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:54.994 19:49:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.994 ************************************ 00:08:54.994 END TEST raid_state_function_test 00:08:54.994 ************************************ 00:08:54.994 19:49:45 bdev_raid -- 
bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:08:54.994 19:49:45 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:54.994 19:49:45 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:54.994 19:49:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:54.994 ************************************ 00:08:54.994 START TEST raid_state_function_test_sb 00:08:54.994 ************************************ 00:08:54.994 19:49:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 4 true 00:08:54.994 19:49:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:08:54.994 19:49:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:08:54.994 19:49:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:08:54.994 19:49:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:54.994 19:49:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:54.994 19:49:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:54.994 19:49:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:54.994 19:49:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:54.994 19:49:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:54.994 19:49:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:54.994 19:49:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:54.994 19:49:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:54.994 19:49:45 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:54.994 19:49:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:54.994 19:49:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:54.994 19:49:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:08:54.994 19:49:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:54.994 19:49:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:54.994 19:49:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:08:54.994 19:49:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:54.994 19:49:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:54.994 19:49:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:54.994 19:49:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:54.994 19:49:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:54.994 19:49:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:08:54.994 19:49:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:08:54.994 19:49:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:08:54.994 19:49:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:08:54.994 Process raid pid: 71905 00:08:54.994 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:54.994 19:49:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=71905 00:08:54.994 19:49:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 71905' 00:08:54.994 19:49:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 71905 00:08:54.994 19:49:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 71905 ']' 00:08:54.994 19:49:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:54.994 19:49:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:54.995 19:49:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:54.995 19:49:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:54.995 19:49:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.995 19:49:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:54.995 [2024-11-26 19:49:45.861995] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 
00:08:54.995 [2024-11-26 19:49:45.862132] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:55.253 [2024-11-26 19:49:46.021194] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:55.253 [2024-11-26 19:49:46.121718] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:55.540 [2024-11-26 19:49:46.242892] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:55.540 [2024-11-26 19:49:46.242936] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:55.798 19:49:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:55.798 19:49:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:08:55.798 19:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:08:55.798 19:49:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.798 19:49:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:55.798 [2024-11-26 19:49:46.708900] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:55.798 [2024-11-26 19:49:46.709049] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:55.798 [2024-11-26 19:49:46.709067] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:55.798 [2024-11-26 19:49:46.709076] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:55.798 [2024-11-26 19:49:46.709082] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:08:55.798 [2024-11-26 19:49:46.709089] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:55.798 [2024-11-26 19:49:46.709094] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:08:55.798 [2024-11-26 19:49:46.709101] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:08:55.798 19:49:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.798 19:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:08:55.798 19:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:55.798 19:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:55.798 19:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:55.798 19:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:55.798 19:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:08:55.798 19:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:55.798 19:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:55.798 19:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:55.798 19:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:55.798 19:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:55.798 19:49:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.798 19:49:46 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:55.798 19:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:55.798 19:49:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.093 19:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:56.093 "name": "Existed_Raid", 00:08:56.093 "uuid": "e206d26e-9ad1-4afa-b9c1-8b1c3fdef29b", 00:08:56.093 "strip_size_kb": 0, 00:08:56.093 "state": "configuring", 00:08:56.093 "raid_level": "raid1", 00:08:56.093 "superblock": true, 00:08:56.093 "num_base_bdevs": 4, 00:08:56.093 "num_base_bdevs_discovered": 0, 00:08:56.093 "num_base_bdevs_operational": 4, 00:08:56.093 "base_bdevs_list": [ 00:08:56.093 { 00:08:56.093 "name": "BaseBdev1", 00:08:56.093 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:56.093 "is_configured": false, 00:08:56.093 "data_offset": 0, 00:08:56.093 "data_size": 0 00:08:56.093 }, 00:08:56.093 { 00:08:56.093 "name": "BaseBdev2", 00:08:56.093 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:56.093 "is_configured": false, 00:08:56.093 "data_offset": 0, 00:08:56.094 "data_size": 0 00:08:56.094 }, 00:08:56.094 { 00:08:56.094 "name": "BaseBdev3", 00:08:56.094 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:56.094 "is_configured": false, 00:08:56.094 "data_offset": 0, 00:08:56.094 "data_size": 0 00:08:56.094 }, 00:08:56.094 { 00:08:56.094 "name": "BaseBdev4", 00:08:56.094 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:56.094 "is_configured": false, 00:08:56.094 "data_offset": 0, 00:08:56.094 "data_size": 0 00:08:56.094 } 00:08:56.094 ] 00:08:56.094 }' 00:08:56.094 19:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:56.094 19:49:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.094 19:49:47 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:56.094 19:49:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.094 19:49:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.352 [2024-11-26 19:49:47.028913] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:56.352 [2024-11-26 19:49:47.028950] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:56.352 19:49:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.352 19:49:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:08:56.352 19:49:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.352 19:49:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.352 [2024-11-26 19:49:47.036888] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:56.352 [2024-11-26 19:49:47.036924] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:56.352 [2024-11-26 19:49:47.036932] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:56.352 [2024-11-26 19:49:47.036939] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:56.352 [2024-11-26 19:49:47.036944] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:56.352 [2024-11-26 19:49:47.036951] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:56.352 [2024-11-26 19:49:47.036956] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev4 00:08:56.352 [2024-11-26 19:49:47.036964] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:08:56.352 19:49:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.352 19:49:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:56.352 19:49:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.352 19:49:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.352 [2024-11-26 19:49:47.067102] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:56.352 BaseBdev1 00:08:56.352 19:49:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.352 19:49:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:56.352 19:49:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:56.352 19:49:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:56.352 19:49:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:56.352 19:49:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:56.352 19:49:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:56.352 19:49:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:56.352 19:49:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.352 19:49:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.352 19:49:47 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.352 19:49:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:56.352 19:49:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.352 19:49:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.352 [ 00:08:56.352 { 00:08:56.352 "name": "BaseBdev1", 00:08:56.352 "aliases": [ 00:08:56.352 "cefb2c61-7131-453d-b1ec-fb9340ac0c39" 00:08:56.352 ], 00:08:56.352 "product_name": "Malloc disk", 00:08:56.352 "block_size": 512, 00:08:56.352 "num_blocks": 65536, 00:08:56.352 "uuid": "cefb2c61-7131-453d-b1ec-fb9340ac0c39", 00:08:56.352 "assigned_rate_limits": { 00:08:56.352 "rw_ios_per_sec": 0, 00:08:56.352 "rw_mbytes_per_sec": 0, 00:08:56.352 "r_mbytes_per_sec": 0, 00:08:56.352 "w_mbytes_per_sec": 0 00:08:56.352 }, 00:08:56.352 "claimed": true, 00:08:56.352 "claim_type": "exclusive_write", 00:08:56.352 "zoned": false, 00:08:56.352 "supported_io_types": { 00:08:56.352 "read": true, 00:08:56.352 "write": true, 00:08:56.352 "unmap": true, 00:08:56.352 "flush": true, 00:08:56.352 "reset": true, 00:08:56.352 "nvme_admin": false, 00:08:56.352 "nvme_io": false, 00:08:56.352 "nvme_io_md": false, 00:08:56.352 "write_zeroes": true, 00:08:56.352 "zcopy": true, 00:08:56.352 "get_zone_info": false, 00:08:56.352 "zone_management": false, 00:08:56.352 "zone_append": false, 00:08:56.352 "compare": false, 00:08:56.352 "compare_and_write": false, 00:08:56.352 "abort": true, 00:08:56.352 "seek_hole": false, 00:08:56.352 "seek_data": false, 00:08:56.352 "copy": true, 00:08:56.352 "nvme_iov_md": false 00:08:56.352 }, 00:08:56.352 "memory_domains": [ 00:08:56.352 { 00:08:56.352 "dma_device_id": "system", 00:08:56.352 "dma_device_type": 1 00:08:56.352 }, 00:08:56.352 { 00:08:56.352 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:56.352 "dma_device_type": 2 00:08:56.352 } 00:08:56.352 
], 00:08:56.352 "driver_specific": {} 00:08:56.352 } 00:08:56.352 ] 00:08:56.352 19:49:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.352 19:49:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:56.352 19:49:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:08:56.352 19:49:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:56.352 19:49:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:56.352 19:49:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:56.352 19:49:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:56.352 19:49:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:08:56.352 19:49:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:56.352 19:49:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:56.352 19:49:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:56.352 19:49:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:56.352 19:49:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:56.352 19:49:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:56.352 19:49:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.352 19:49:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.352 19:49:47 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.352 19:49:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:56.352 "name": "Existed_Raid", 00:08:56.352 "uuid": "bb3d09f0-c85b-4deb-88e4-3778e20e4340", 00:08:56.352 "strip_size_kb": 0, 00:08:56.352 "state": "configuring", 00:08:56.352 "raid_level": "raid1", 00:08:56.352 "superblock": true, 00:08:56.352 "num_base_bdevs": 4, 00:08:56.352 "num_base_bdevs_discovered": 1, 00:08:56.352 "num_base_bdevs_operational": 4, 00:08:56.352 "base_bdevs_list": [ 00:08:56.352 { 00:08:56.352 "name": "BaseBdev1", 00:08:56.352 "uuid": "cefb2c61-7131-453d-b1ec-fb9340ac0c39", 00:08:56.352 "is_configured": true, 00:08:56.352 "data_offset": 2048, 00:08:56.352 "data_size": 63488 00:08:56.352 }, 00:08:56.352 { 00:08:56.352 "name": "BaseBdev2", 00:08:56.352 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:56.352 "is_configured": false, 00:08:56.352 "data_offset": 0, 00:08:56.352 "data_size": 0 00:08:56.352 }, 00:08:56.352 { 00:08:56.352 "name": "BaseBdev3", 00:08:56.352 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:56.352 "is_configured": false, 00:08:56.352 "data_offset": 0, 00:08:56.352 "data_size": 0 00:08:56.352 }, 00:08:56.352 { 00:08:56.352 "name": "BaseBdev4", 00:08:56.352 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:56.352 "is_configured": false, 00:08:56.352 "data_offset": 0, 00:08:56.352 "data_size": 0 00:08:56.352 } 00:08:56.352 ] 00:08:56.352 }' 00:08:56.352 19:49:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:56.353 19:49:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.610 19:49:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:56.610 19:49:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.610 19:49:47 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.610 [2024-11-26 19:49:47.423239] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:56.610 [2024-11-26 19:49:47.423410] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:56.610 19:49:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.610 19:49:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:08:56.610 19:49:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.610 19:49:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.610 [2024-11-26 19:49:47.431276] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:56.610 [2024-11-26 19:49:47.433030] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:56.610 [2024-11-26 19:49:47.433071] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:56.610 [2024-11-26 19:49:47.433079] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:56.610 [2024-11-26 19:49:47.433088] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:56.610 [2024-11-26 19:49:47.433094] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:08:56.610 [2024-11-26 19:49:47.433100] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:08:56.610 19:49:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.610 19:49:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 
1 )) 00:08:56.610 19:49:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:56.610 19:49:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:08:56.610 19:49:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:56.610 19:49:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:56.610 19:49:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:56.610 19:49:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:56.610 19:49:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:08:56.610 19:49:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:56.610 19:49:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:56.610 19:49:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:56.610 19:49:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:56.610 19:49:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:56.610 19:49:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:56.610 19:49:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.610 19:49:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.610 19:49:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.610 19:49:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 
00:08:56.610 "name": "Existed_Raid", 00:08:56.610 "uuid": "312a534a-e741-4711-a80c-f0be218f0d07", 00:08:56.610 "strip_size_kb": 0, 00:08:56.610 "state": "configuring", 00:08:56.610 "raid_level": "raid1", 00:08:56.610 "superblock": true, 00:08:56.610 "num_base_bdevs": 4, 00:08:56.610 "num_base_bdevs_discovered": 1, 00:08:56.610 "num_base_bdevs_operational": 4, 00:08:56.610 "base_bdevs_list": [ 00:08:56.610 { 00:08:56.610 "name": "BaseBdev1", 00:08:56.610 "uuid": "cefb2c61-7131-453d-b1ec-fb9340ac0c39", 00:08:56.610 "is_configured": true, 00:08:56.610 "data_offset": 2048, 00:08:56.610 "data_size": 63488 00:08:56.610 }, 00:08:56.610 { 00:08:56.610 "name": "BaseBdev2", 00:08:56.610 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:56.610 "is_configured": false, 00:08:56.610 "data_offset": 0, 00:08:56.610 "data_size": 0 00:08:56.610 }, 00:08:56.610 { 00:08:56.610 "name": "BaseBdev3", 00:08:56.610 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:56.610 "is_configured": false, 00:08:56.610 "data_offset": 0, 00:08:56.610 "data_size": 0 00:08:56.610 }, 00:08:56.610 { 00:08:56.610 "name": "BaseBdev4", 00:08:56.610 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:56.610 "is_configured": false, 00:08:56.610 "data_offset": 0, 00:08:56.610 "data_size": 0 00:08:56.610 } 00:08:56.610 ] 00:08:56.610 }' 00:08:56.610 19:49:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:56.610 19:49:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.867 19:49:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:56.867 19:49:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.867 19:49:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.867 BaseBdev2 00:08:56.867 [2024-11-26 19:49:47.747840] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev2 is claimed 00:08:56.867 19:49:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.867 19:49:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:56.867 19:49:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:56.867 19:49:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:56.867 19:49:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:56.867 19:49:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:56.867 19:49:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:56.867 19:49:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:56.867 19:49:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.867 19:49:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.867 19:49:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.867 19:49:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:56.867 19:49:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.867 19:49:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.867 [ 00:08:56.867 { 00:08:56.867 "name": "BaseBdev2", 00:08:56.867 "aliases": [ 00:08:56.867 "34b01b13-e09a-4a75-aded-8e59a4ff1ab5" 00:08:56.867 ], 00:08:56.867 "product_name": "Malloc disk", 00:08:56.867 "block_size": 512, 00:08:56.867 "num_blocks": 65536, 00:08:56.867 "uuid": "34b01b13-e09a-4a75-aded-8e59a4ff1ab5", 00:08:56.867 
"assigned_rate_limits": { 00:08:56.867 "rw_ios_per_sec": 0, 00:08:56.867 "rw_mbytes_per_sec": 0, 00:08:56.867 "r_mbytes_per_sec": 0, 00:08:56.867 "w_mbytes_per_sec": 0 00:08:56.867 }, 00:08:56.867 "claimed": true, 00:08:56.867 "claim_type": "exclusive_write", 00:08:56.867 "zoned": false, 00:08:56.867 "supported_io_types": { 00:08:56.867 "read": true, 00:08:56.867 "write": true, 00:08:56.867 "unmap": true, 00:08:56.867 "flush": true, 00:08:56.867 "reset": true, 00:08:56.867 "nvme_admin": false, 00:08:56.867 "nvme_io": false, 00:08:56.867 "nvme_io_md": false, 00:08:56.867 "write_zeroes": true, 00:08:56.867 "zcopy": true, 00:08:56.867 "get_zone_info": false, 00:08:56.867 "zone_management": false, 00:08:56.867 "zone_append": false, 00:08:56.867 "compare": false, 00:08:56.867 "compare_and_write": false, 00:08:56.867 "abort": true, 00:08:56.867 "seek_hole": false, 00:08:56.867 "seek_data": false, 00:08:56.867 "copy": true, 00:08:56.867 "nvme_iov_md": false 00:08:56.867 }, 00:08:56.867 "memory_domains": [ 00:08:56.867 { 00:08:56.867 "dma_device_id": "system", 00:08:56.867 "dma_device_type": 1 00:08:56.867 }, 00:08:56.867 { 00:08:56.867 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:56.867 "dma_device_type": 2 00:08:56.867 } 00:08:56.867 ], 00:08:56.867 "driver_specific": {} 00:08:56.867 } 00:08:56.867 ] 00:08:56.867 19:49:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.867 19:49:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:56.867 19:49:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:56.867 19:49:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:56.867 19:49:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:08:56.867 19:49:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:08:56.867 19:49:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:56.868 19:49:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:56.868 19:49:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:56.868 19:49:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:08:56.868 19:49:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:56.868 19:49:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:56.868 19:49:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:56.868 19:49:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:56.868 19:49:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:56.868 19:49:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:56.868 19:49:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.868 19:49:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.868 19:49:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.126 19:49:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:57.126 "name": "Existed_Raid", 00:08:57.126 "uuid": "312a534a-e741-4711-a80c-f0be218f0d07", 00:08:57.126 "strip_size_kb": 0, 00:08:57.126 "state": "configuring", 00:08:57.126 "raid_level": "raid1", 00:08:57.126 "superblock": true, 00:08:57.126 "num_base_bdevs": 4, 00:08:57.126 "num_base_bdevs_discovered": 2, 00:08:57.126 "num_base_bdevs_operational": 4, 
00:08:57.126 "base_bdevs_list": [ 00:08:57.126 { 00:08:57.126 "name": "BaseBdev1", 00:08:57.126 "uuid": "cefb2c61-7131-453d-b1ec-fb9340ac0c39", 00:08:57.126 "is_configured": true, 00:08:57.126 "data_offset": 2048, 00:08:57.126 "data_size": 63488 00:08:57.126 }, 00:08:57.126 { 00:08:57.126 "name": "BaseBdev2", 00:08:57.126 "uuid": "34b01b13-e09a-4a75-aded-8e59a4ff1ab5", 00:08:57.126 "is_configured": true, 00:08:57.126 "data_offset": 2048, 00:08:57.126 "data_size": 63488 00:08:57.126 }, 00:08:57.126 { 00:08:57.126 "name": "BaseBdev3", 00:08:57.126 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:57.126 "is_configured": false, 00:08:57.126 "data_offset": 0, 00:08:57.126 "data_size": 0 00:08:57.126 }, 00:08:57.126 { 00:08:57.126 "name": "BaseBdev4", 00:08:57.126 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:57.126 "is_configured": false, 00:08:57.126 "data_offset": 0, 00:08:57.126 "data_size": 0 00:08:57.126 } 00:08:57.126 ] 00:08:57.126 }' 00:08:57.126 19:49:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:57.126 19:49:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:57.383 19:49:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:57.383 19:49:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.383 19:49:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:57.383 [2024-11-26 19:49:48.132840] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:57.383 BaseBdev3 00:08:57.383 19:49:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.383 19:49:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:08:57.383 19:49:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # 
local bdev_name=BaseBdev3 00:08:57.383 19:49:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:57.383 19:49:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:57.383 19:49:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:57.384 19:49:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:57.384 19:49:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:57.384 19:49:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.384 19:49:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:57.384 19:49:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.384 19:49:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:57.384 19:49:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.384 19:49:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:57.384 [ 00:08:57.384 { 00:08:57.384 "name": "BaseBdev3", 00:08:57.384 "aliases": [ 00:08:57.384 "8fcbddfc-f61f-40ba-aea4-6a22e8e10c3f" 00:08:57.384 ], 00:08:57.384 "product_name": "Malloc disk", 00:08:57.384 "block_size": 512, 00:08:57.384 "num_blocks": 65536, 00:08:57.384 "uuid": "8fcbddfc-f61f-40ba-aea4-6a22e8e10c3f", 00:08:57.384 "assigned_rate_limits": { 00:08:57.384 "rw_ios_per_sec": 0, 00:08:57.384 "rw_mbytes_per_sec": 0, 00:08:57.384 "r_mbytes_per_sec": 0, 00:08:57.384 "w_mbytes_per_sec": 0 00:08:57.384 }, 00:08:57.384 "claimed": true, 00:08:57.384 "claim_type": "exclusive_write", 00:08:57.384 "zoned": false, 00:08:57.384 "supported_io_types": { 00:08:57.384 "read": true, 00:08:57.384 
"write": true, 00:08:57.384 "unmap": true, 00:08:57.384 "flush": true, 00:08:57.384 "reset": true, 00:08:57.384 "nvme_admin": false, 00:08:57.384 "nvme_io": false, 00:08:57.384 "nvme_io_md": false, 00:08:57.384 "write_zeroes": true, 00:08:57.384 "zcopy": true, 00:08:57.384 "get_zone_info": false, 00:08:57.384 "zone_management": false, 00:08:57.384 "zone_append": false, 00:08:57.384 "compare": false, 00:08:57.384 "compare_and_write": false, 00:08:57.384 "abort": true, 00:08:57.384 "seek_hole": false, 00:08:57.384 "seek_data": false, 00:08:57.384 "copy": true, 00:08:57.384 "nvme_iov_md": false 00:08:57.384 }, 00:08:57.384 "memory_domains": [ 00:08:57.384 { 00:08:57.384 "dma_device_id": "system", 00:08:57.384 "dma_device_type": 1 00:08:57.384 }, 00:08:57.384 { 00:08:57.384 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:57.384 "dma_device_type": 2 00:08:57.384 } 00:08:57.384 ], 00:08:57.384 "driver_specific": {} 00:08:57.384 } 00:08:57.384 ] 00:08:57.384 19:49:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.384 19:49:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:57.384 19:49:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:57.384 19:49:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:57.384 19:49:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:08:57.384 19:49:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:57.384 19:49:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:57.384 19:49:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:57.384 19:49:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:08:57.384 19:49:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:08:57.384 19:49:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:57.384 19:49:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:57.384 19:49:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:57.384 19:49:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:57.384 19:49:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:57.384 19:49:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:57.384 19:49:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.384 19:49:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:57.384 19:49:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.384 19:49:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:57.384 "name": "Existed_Raid", 00:08:57.384 "uuid": "312a534a-e741-4711-a80c-f0be218f0d07", 00:08:57.384 "strip_size_kb": 0, 00:08:57.384 "state": "configuring", 00:08:57.384 "raid_level": "raid1", 00:08:57.384 "superblock": true, 00:08:57.384 "num_base_bdevs": 4, 00:08:57.384 "num_base_bdevs_discovered": 3, 00:08:57.384 "num_base_bdevs_operational": 4, 00:08:57.384 "base_bdevs_list": [ 00:08:57.384 { 00:08:57.384 "name": "BaseBdev1", 00:08:57.384 "uuid": "cefb2c61-7131-453d-b1ec-fb9340ac0c39", 00:08:57.384 "is_configured": true, 00:08:57.384 "data_offset": 2048, 00:08:57.384 "data_size": 63488 00:08:57.384 }, 00:08:57.384 { 00:08:57.384 "name": "BaseBdev2", 00:08:57.384 "uuid": 
"34b01b13-e09a-4a75-aded-8e59a4ff1ab5", 00:08:57.384 "is_configured": true, 00:08:57.384 "data_offset": 2048, 00:08:57.384 "data_size": 63488 00:08:57.384 }, 00:08:57.384 { 00:08:57.384 "name": "BaseBdev3", 00:08:57.384 "uuid": "8fcbddfc-f61f-40ba-aea4-6a22e8e10c3f", 00:08:57.384 "is_configured": true, 00:08:57.384 "data_offset": 2048, 00:08:57.384 "data_size": 63488 00:08:57.384 }, 00:08:57.384 { 00:08:57.384 "name": "BaseBdev4", 00:08:57.384 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:57.384 "is_configured": false, 00:08:57.384 "data_offset": 0, 00:08:57.384 "data_size": 0 00:08:57.384 } 00:08:57.384 ] 00:08:57.384 }' 00:08:57.384 19:49:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:57.384 19:49:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:57.642 19:49:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:08:57.642 19:49:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.642 19:49:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:57.642 [2024-11-26 19:49:48.513708] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:08:57.642 BaseBdev4 00:08:57.642 [2024-11-26 19:49:48.514172] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:57.642 [2024-11-26 19:49:48.514193] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:57.642 [2024-11-26 19:49:48.514559] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:08:57.642 [2024-11-26 19:49:48.514728] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:57.642 [2024-11-26 19:49:48.514740] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:08:57.642 [2024-11-26 19:49:48.514893] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:57.642 19:49:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.642 19:49:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:08:57.642 19:49:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:08:57.642 19:49:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:57.642 19:49:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:57.642 19:49:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:57.642 19:49:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:57.642 19:49:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:57.642 19:49:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.642 19:49:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:57.642 19:49:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.642 19:49:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:08:57.642 19:49:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.642 19:49:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:57.642 [ 00:08:57.642 { 00:08:57.642 "name": "BaseBdev4", 00:08:57.642 "aliases": [ 00:08:57.642 "e20664ca-f0a8-4e25-ad02-dfcaf89cf312" 00:08:57.642 ], 00:08:57.642 "product_name": "Malloc disk", 00:08:57.642 "block_size": 512, 00:08:57.642 
"num_blocks": 65536, 00:08:57.642 "uuid": "e20664ca-f0a8-4e25-ad02-dfcaf89cf312", 00:08:57.642 "assigned_rate_limits": { 00:08:57.642 "rw_ios_per_sec": 0, 00:08:57.642 "rw_mbytes_per_sec": 0, 00:08:57.642 "r_mbytes_per_sec": 0, 00:08:57.642 "w_mbytes_per_sec": 0 00:08:57.642 }, 00:08:57.642 "claimed": true, 00:08:57.642 "claim_type": "exclusive_write", 00:08:57.642 "zoned": false, 00:08:57.642 "supported_io_types": { 00:08:57.642 "read": true, 00:08:57.642 "write": true, 00:08:57.642 "unmap": true, 00:08:57.642 "flush": true, 00:08:57.642 "reset": true, 00:08:57.642 "nvme_admin": false, 00:08:57.642 "nvme_io": false, 00:08:57.642 "nvme_io_md": false, 00:08:57.642 "write_zeroes": true, 00:08:57.642 "zcopy": true, 00:08:57.642 "get_zone_info": false, 00:08:57.642 "zone_management": false, 00:08:57.642 "zone_append": false, 00:08:57.642 "compare": false, 00:08:57.642 "compare_and_write": false, 00:08:57.642 "abort": true, 00:08:57.642 "seek_hole": false, 00:08:57.642 "seek_data": false, 00:08:57.642 "copy": true, 00:08:57.642 "nvme_iov_md": false 00:08:57.642 }, 00:08:57.642 "memory_domains": [ 00:08:57.642 { 00:08:57.642 "dma_device_id": "system", 00:08:57.642 "dma_device_type": 1 00:08:57.642 }, 00:08:57.642 { 00:08:57.642 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:57.642 "dma_device_type": 2 00:08:57.642 } 00:08:57.642 ], 00:08:57.642 "driver_specific": {} 00:08:57.642 } 00:08:57.642 ] 00:08:57.642 19:49:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.642 19:49:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:57.642 19:49:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:57.642 19:49:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:57.642 19:49:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 
00:08:57.642 19:49:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:57.642 19:49:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:57.642 19:49:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:57.642 19:49:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:57.642 19:49:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:08:57.642 19:49:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:57.642 19:49:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:57.642 19:49:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:57.642 19:49:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:57.642 19:49:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:57.643 19:49:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:57.643 19:49:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.643 19:49:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:57.643 19:49:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.901 19:49:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:57.901 "name": "Existed_Raid", 00:08:57.901 "uuid": "312a534a-e741-4711-a80c-f0be218f0d07", 00:08:57.901 "strip_size_kb": 0, 00:08:57.901 "state": "online", 00:08:57.901 "raid_level": "raid1", 00:08:57.901 "superblock": true, 00:08:57.901 "num_base_bdevs": 4, 
00:08:57.901 "num_base_bdevs_discovered": 4, 00:08:57.901 "num_base_bdevs_operational": 4, 00:08:57.901 "base_bdevs_list": [ 00:08:57.901 { 00:08:57.901 "name": "BaseBdev1", 00:08:57.901 "uuid": "cefb2c61-7131-453d-b1ec-fb9340ac0c39", 00:08:57.901 "is_configured": true, 00:08:57.901 "data_offset": 2048, 00:08:57.901 "data_size": 63488 00:08:57.901 }, 00:08:57.901 { 00:08:57.901 "name": "BaseBdev2", 00:08:57.901 "uuid": "34b01b13-e09a-4a75-aded-8e59a4ff1ab5", 00:08:57.901 "is_configured": true, 00:08:57.901 "data_offset": 2048, 00:08:57.901 "data_size": 63488 00:08:57.901 }, 00:08:57.901 { 00:08:57.901 "name": "BaseBdev3", 00:08:57.901 "uuid": "8fcbddfc-f61f-40ba-aea4-6a22e8e10c3f", 00:08:57.901 "is_configured": true, 00:08:57.901 "data_offset": 2048, 00:08:57.901 "data_size": 63488 00:08:57.901 }, 00:08:57.901 { 00:08:57.901 "name": "BaseBdev4", 00:08:57.901 "uuid": "e20664ca-f0a8-4e25-ad02-dfcaf89cf312", 00:08:57.901 "is_configured": true, 00:08:57.901 "data_offset": 2048, 00:08:57.901 "data_size": 63488 00:08:57.901 } 00:08:57.901 ] 00:08:57.901 }' 00:08:57.901 19:49:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:57.901 19:49:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:58.160 19:49:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:58.160 19:49:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:58.160 19:49:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:58.160 19:49:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:58.160 19:49:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:58.160 19:49:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:58.160 
19:49:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:58.160 19:49:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:58.160 19:49:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.160 19:49:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:58.160 [2024-11-26 19:49:48.870229] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:58.160 19:49:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.160 19:49:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:58.160 "name": "Existed_Raid", 00:08:58.160 "aliases": [ 00:08:58.160 "312a534a-e741-4711-a80c-f0be218f0d07" 00:08:58.160 ], 00:08:58.160 "product_name": "Raid Volume", 00:08:58.160 "block_size": 512, 00:08:58.160 "num_blocks": 63488, 00:08:58.160 "uuid": "312a534a-e741-4711-a80c-f0be218f0d07", 00:08:58.160 "assigned_rate_limits": { 00:08:58.160 "rw_ios_per_sec": 0, 00:08:58.160 "rw_mbytes_per_sec": 0, 00:08:58.160 "r_mbytes_per_sec": 0, 00:08:58.160 "w_mbytes_per_sec": 0 00:08:58.160 }, 00:08:58.160 "claimed": false, 00:08:58.160 "zoned": false, 00:08:58.160 "supported_io_types": { 00:08:58.160 "read": true, 00:08:58.160 "write": true, 00:08:58.160 "unmap": false, 00:08:58.160 "flush": false, 00:08:58.160 "reset": true, 00:08:58.160 "nvme_admin": false, 00:08:58.160 "nvme_io": false, 00:08:58.160 "nvme_io_md": false, 00:08:58.160 "write_zeroes": true, 00:08:58.160 "zcopy": false, 00:08:58.160 "get_zone_info": false, 00:08:58.160 "zone_management": false, 00:08:58.160 "zone_append": false, 00:08:58.160 "compare": false, 00:08:58.160 "compare_and_write": false, 00:08:58.160 "abort": false, 00:08:58.160 "seek_hole": false, 00:08:58.160 "seek_data": false, 00:08:58.160 "copy": false, 00:08:58.160 
"nvme_iov_md": false 00:08:58.160 }, 00:08:58.160 "memory_domains": [ 00:08:58.160 { 00:08:58.160 "dma_device_id": "system", 00:08:58.160 "dma_device_type": 1 00:08:58.160 }, 00:08:58.160 { 00:08:58.160 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:58.160 "dma_device_type": 2 00:08:58.160 }, 00:08:58.160 { 00:08:58.160 "dma_device_id": "system", 00:08:58.160 "dma_device_type": 1 00:08:58.160 }, 00:08:58.160 { 00:08:58.160 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:58.160 "dma_device_type": 2 00:08:58.160 }, 00:08:58.160 { 00:08:58.160 "dma_device_id": "system", 00:08:58.160 "dma_device_type": 1 00:08:58.160 }, 00:08:58.160 { 00:08:58.160 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:58.160 "dma_device_type": 2 00:08:58.160 }, 00:08:58.160 { 00:08:58.160 "dma_device_id": "system", 00:08:58.160 "dma_device_type": 1 00:08:58.160 }, 00:08:58.160 { 00:08:58.160 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:58.160 "dma_device_type": 2 00:08:58.160 } 00:08:58.160 ], 00:08:58.160 "driver_specific": { 00:08:58.160 "raid": { 00:08:58.160 "uuid": "312a534a-e741-4711-a80c-f0be218f0d07", 00:08:58.160 "strip_size_kb": 0, 00:08:58.160 "state": "online", 00:08:58.160 "raid_level": "raid1", 00:08:58.160 "superblock": true, 00:08:58.160 "num_base_bdevs": 4, 00:08:58.160 "num_base_bdevs_discovered": 4, 00:08:58.160 "num_base_bdevs_operational": 4, 00:08:58.160 "base_bdevs_list": [ 00:08:58.160 { 00:08:58.160 "name": "BaseBdev1", 00:08:58.160 "uuid": "cefb2c61-7131-453d-b1ec-fb9340ac0c39", 00:08:58.160 "is_configured": true, 00:08:58.160 "data_offset": 2048, 00:08:58.160 "data_size": 63488 00:08:58.160 }, 00:08:58.160 { 00:08:58.160 "name": "BaseBdev2", 00:08:58.160 "uuid": "34b01b13-e09a-4a75-aded-8e59a4ff1ab5", 00:08:58.160 "is_configured": true, 00:08:58.160 "data_offset": 2048, 00:08:58.160 "data_size": 63488 00:08:58.160 }, 00:08:58.160 { 00:08:58.160 "name": "BaseBdev3", 00:08:58.160 "uuid": "8fcbddfc-f61f-40ba-aea4-6a22e8e10c3f", 00:08:58.160 "is_configured": true, 
00:08:58.160 "data_offset": 2048, 00:08:58.160 "data_size": 63488 00:08:58.160 }, 00:08:58.160 { 00:08:58.160 "name": "BaseBdev4", 00:08:58.160 "uuid": "e20664ca-f0a8-4e25-ad02-dfcaf89cf312", 00:08:58.160 "is_configured": true, 00:08:58.160 "data_offset": 2048, 00:08:58.160 "data_size": 63488 00:08:58.160 } 00:08:58.160 ] 00:08:58.160 } 00:08:58.160 } 00:08:58.160 }' 00:08:58.160 19:49:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:58.160 19:49:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:58.160 BaseBdev2 00:08:58.160 BaseBdev3 00:08:58.160 BaseBdev4' 00:08:58.161 19:49:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:58.161 19:49:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:58.161 19:49:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:58.161 19:49:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:58.161 19:49:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.161 19:49:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:58.161 19:49:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:58.161 19:49:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.161 19:49:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:58.161 19:49:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:58.161 19:49:48 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:58.161 19:49:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:58.161 19:49:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:58.161 19:49:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.161 19:49:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:58.161 19:49:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.161 19:49:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:58.161 19:49:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:58.161 19:49:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:58.161 19:49:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:58.161 19:49:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:58.161 19:49:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.161 19:49:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:58.161 19:49:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.161 19:49:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:58.161 19:49:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:58.161 19:49:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:08:58.161 19:49:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:58.161 19:49:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:08:58.161 19:49:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.161 19:49:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:58.161 19:49:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.161 19:49:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:58.161 19:49:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:58.161 19:49:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:58.161 19:49:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.161 19:49:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:58.161 [2024-11-26 19:49:49.093943] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:58.419 19:49:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.419 19:49:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:58.419 19:49:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:08:58.419 19:49:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:58.419 19:49:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:08:58.419 19:49:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:08:58.419 19:49:49 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:08:58.419 19:49:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:58.419 19:49:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:58.419 19:49:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:58.419 19:49:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:58.419 19:49:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:58.419 19:49:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:58.419 19:49:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:58.419 19:49:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:58.419 19:49:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:58.419 19:49:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:58.419 19:49:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:58.419 19:49:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.419 19:49:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:58.419 19:49:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.419 19:49:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:58.419 "name": "Existed_Raid", 00:08:58.419 "uuid": "312a534a-e741-4711-a80c-f0be218f0d07", 00:08:58.419 "strip_size_kb": 0, 00:08:58.419 
"state": "online", 00:08:58.419 "raid_level": "raid1", 00:08:58.419 "superblock": true, 00:08:58.419 "num_base_bdevs": 4, 00:08:58.419 "num_base_bdevs_discovered": 3, 00:08:58.419 "num_base_bdevs_operational": 3, 00:08:58.419 "base_bdevs_list": [ 00:08:58.419 { 00:08:58.419 "name": null, 00:08:58.419 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:58.419 "is_configured": false, 00:08:58.419 "data_offset": 0, 00:08:58.419 "data_size": 63488 00:08:58.419 }, 00:08:58.419 { 00:08:58.419 "name": "BaseBdev2", 00:08:58.419 "uuid": "34b01b13-e09a-4a75-aded-8e59a4ff1ab5", 00:08:58.419 "is_configured": true, 00:08:58.419 "data_offset": 2048, 00:08:58.419 "data_size": 63488 00:08:58.419 }, 00:08:58.419 { 00:08:58.419 "name": "BaseBdev3", 00:08:58.419 "uuid": "8fcbddfc-f61f-40ba-aea4-6a22e8e10c3f", 00:08:58.419 "is_configured": true, 00:08:58.419 "data_offset": 2048, 00:08:58.419 "data_size": 63488 00:08:58.419 }, 00:08:58.419 { 00:08:58.419 "name": "BaseBdev4", 00:08:58.419 "uuid": "e20664ca-f0a8-4e25-ad02-dfcaf89cf312", 00:08:58.419 "is_configured": true, 00:08:58.419 "data_offset": 2048, 00:08:58.419 "data_size": 63488 00:08:58.419 } 00:08:58.419 ] 00:08:58.419 }' 00:08:58.419 19:49:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:58.419 19:49:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:58.677 19:49:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:58.677 19:49:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:58.677 19:49:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:58.677 19:49:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.677 19:49:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:58.677 19:49:49 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:58.677 19:49:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.677 19:49:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:58.677 19:49:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:58.677 19:49:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:58.677 19:49:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.677 19:49:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:58.677 [2024-11-26 19:49:49.499934] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:58.677 19:49:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.677 19:49:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:58.677 19:49:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:58.677 19:49:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:58.677 19:49:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:58.677 19:49:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.677 19:49:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:58.677 19:49:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.677 19:49:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:58.677 19:49:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid 
'!=' Existed_Raid ']' 00:08:58.677 19:49:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:08:58.677 19:49:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.677 19:49:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:58.677 [2024-11-26 19:49:49.601546] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:58.935 19:49:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.935 19:49:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:58.935 19:49:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:58.935 19:49:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:58.935 19:49:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.935 19:49:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:58.935 19:49:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:58.935 19:49:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.935 19:49:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:58.935 19:49:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:58.935 19:49:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:08:58.935 19:49:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.935 19:49:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:58.935 [2024-11-26 19:49:49.698866] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:08:58.935 [2024-11-26 19:49:49.698987] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:58.935 [2024-11-26 19:49:49.760811] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:58.935 [2024-11-26 19:49:49.760869] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:58.935 [2024-11-26 19:49:49.760883] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:58.935 19:49:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.935 19:49:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:58.935 19:49:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:58.935 19:49:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:58.935 19:49:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.935 19:49:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:58.935 19:49:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:58.935 19:49:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.935 19:49:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:58.935 19:49:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:58.935 19:49:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:08:58.935 19:49:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:08:58.935 19:49:49 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:58.935 19:49:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:58.935 19:49:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.935 19:49:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:58.935 BaseBdev2 00:08:58.935 19:49:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.935 19:49:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:08:58.935 19:49:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:58.935 19:49:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:58.935 19:49:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:58.935 19:49:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:58.935 19:49:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:58.935 19:49:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:58.935 19:49:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.935 19:49:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:58.935 19:49:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.935 19:49:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:58.935 19:49:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.935 19:49:49 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:08:58.935 [ 00:08:58.935 { 00:08:58.935 "name": "BaseBdev2", 00:08:58.935 "aliases": [ 00:08:58.935 "c44bae53-c65a-4b3c-993e-788f39bab30f" 00:08:58.935 ], 00:08:58.935 "product_name": "Malloc disk", 00:08:58.935 "block_size": 512, 00:08:58.935 "num_blocks": 65536, 00:08:58.935 "uuid": "c44bae53-c65a-4b3c-993e-788f39bab30f", 00:08:58.935 "assigned_rate_limits": { 00:08:58.935 "rw_ios_per_sec": 0, 00:08:58.935 "rw_mbytes_per_sec": 0, 00:08:58.935 "r_mbytes_per_sec": 0, 00:08:58.935 "w_mbytes_per_sec": 0 00:08:58.935 }, 00:08:58.935 "claimed": false, 00:08:58.935 "zoned": false, 00:08:58.935 "supported_io_types": { 00:08:58.935 "read": true, 00:08:58.935 "write": true, 00:08:58.935 "unmap": true, 00:08:58.935 "flush": true, 00:08:58.935 "reset": true, 00:08:58.935 "nvme_admin": false, 00:08:58.935 "nvme_io": false, 00:08:58.935 "nvme_io_md": false, 00:08:58.935 "write_zeroes": true, 00:08:58.935 "zcopy": true, 00:08:58.935 "get_zone_info": false, 00:08:58.935 "zone_management": false, 00:08:58.935 "zone_append": false, 00:08:58.935 "compare": false, 00:08:58.935 "compare_and_write": false, 00:08:58.935 "abort": true, 00:08:58.935 "seek_hole": false, 00:08:58.935 "seek_data": false, 00:08:58.935 "copy": true, 00:08:58.935 "nvme_iov_md": false 00:08:58.935 }, 00:08:58.935 "memory_domains": [ 00:08:58.935 { 00:08:58.935 "dma_device_id": "system", 00:08:58.935 "dma_device_type": 1 00:08:58.935 }, 00:08:58.935 { 00:08:58.935 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:58.935 "dma_device_type": 2 00:08:58.935 } 00:08:58.935 ], 00:08:58.935 "driver_specific": {} 00:08:58.935 } 00:08:58.935 ] 00:08:58.935 19:49:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.935 19:49:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:58.935 19:49:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:58.935 19:49:49 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:58.935 19:49:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:58.935 19:49:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.935 19:49:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:59.193 BaseBdev3 00:08:59.193 19:49:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.193 19:49:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:08:59.193 19:49:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:59.193 19:49:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:59.193 19:49:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:59.193 19:49:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:59.193 19:49:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:59.193 19:49:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:59.193 19:49:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.193 19:49:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:59.193 19:49:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.193 19:49:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:59.193 19:49:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.193 19:49:49 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:59.193 [ 00:08:59.193 { 00:08:59.193 "name": "BaseBdev3", 00:08:59.193 "aliases": [ 00:08:59.193 "a201447e-578c-454e-a6e4-11954cd5763c" 00:08:59.193 ], 00:08:59.193 "product_name": "Malloc disk", 00:08:59.193 "block_size": 512, 00:08:59.193 "num_blocks": 65536, 00:08:59.193 "uuid": "a201447e-578c-454e-a6e4-11954cd5763c", 00:08:59.193 "assigned_rate_limits": { 00:08:59.193 "rw_ios_per_sec": 0, 00:08:59.193 "rw_mbytes_per_sec": 0, 00:08:59.193 "r_mbytes_per_sec": 0, 00:08:59.193 "w_mbytes_per_sec": 0 00:08:59.193 }, 00:08:59.193 "claimed": false, 00:08:59.193 "zoned": false, 00:08:59.193 "supported_io_types": { 00:08:59.193 "read": true, 00:08:59.193 "write": true, 00:08:59.193 "unmap": true, 00:08:59.193 "flush": true, 00:08:59.193 "reset": true, 00:08:59.193 "nvme_admin": false, 00:08:59.193 "nvme_io": false, 00:08:59.193 "nvme_io_md": false, 00:08:59.193 "write_zeroes": true, 00:08:59.193 "zcopy": true, 00:08:59.193 "get_zone_info": false, 00:08:59.193 "zone_management": false, 00:08:59.193 "zone_append": false, 00:08:59.193 "compare": false, 00:08:59.193 "compare_and_write": false, 00:08:59.193 "abort": true, 00:08:59.193 "seek_hole": false, 00:08:59.193 "seek_data": false, 00:08:59.193 "copy": true, 00:08:59.193 "nvme_iov_md": false 00:08:59.193 }, 00:08:59.193 "memory_domains": [ 00:08:59.193 { 00:08:59.193 "dma_device_id": "system", 00:08:59.193 "dma_device_type": 1 00:08:59.193 }, 00:08:59.193 { 00:08:59.193 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:59.193 "dma_device_type": 2 00:08:59.193 } 00:08:59.193 ], 00:08:59.193 "driver_specific": {} 00:08:59.193 } 00:08:59.193 ] 00:08:59.193 19:49:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.193 19:49:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:59.193 19:49:49 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:59.193 19:49:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:59.193 19:49:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:08:59.193 19:49:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.193 19:49:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:59.193 BaseBdev4 00:08:59.193 19:49:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.193 19:49:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:08:59.193 19:49:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:08:59.193 19:49:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:59.193 19:49:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:59.193 19:49:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:59.193 19:49:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:59.193 19:49:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:59.193 19:49:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.193 19:49:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:59.193 19:49:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.193 19:49:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:08:59.193 19:49:49 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.193 19:49:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:59.193 [ 00:08:59.193 { 00:08:59.193 "name": "BaseBdev4", 00:08:59.193 "aliases": [ 00:08:59.193 "a02fec7e-e8b9-48eb-845b-714142a36006" 00:08:59.193 ], 00:08:59.193 "product_name": "Malloc disk", 00:08:59.193 "block_size": 512, 00:08:59.193 "num_blocks": 65536, 00:08:59.193 "uuid": "a02fec7e-e8b9-48eb-845b-714142a36006", 00:08:59.193 "assigned_rate_limits": { 00:08:59.193 "rw_ios_per_sec": 0, 00:08:59.193 "rw_mbytes_per_sec": 0, 00:08:59.193 "r_mbytes_per_sec": 0, 00:08:59.193 "w_mbytes_per_sec": 0 00:08:59.193 }, 00:08:59.193 "claimed": false, 00:08:59.193 "zoned": false, 00:08:59.193 "supported_io_types": { 00:08:59.193 "read": true, 00:08:59.193 "write": true, 00:08:59.193 "unmap": true, 00:08:59.193 "flush": true, 00:08:59.193 "reset": true, 00:08:59.193 "nvme_admin": false, 00:08:59.193 "nvme_io": false, 00:08:59.193 "nvme_io_md": false, 00:08:59.193 "write_zeroes": true, 00:08:59.193 "zcopy": true, 00:08:59.193 "get_zone_info": false, 00:08:59.193 "zone_management": false, 00:08:59.193 "zone_append": false, 00:08:59.193 "compare": false, 00:08:59.193 "compare_and_write": false, 00:08:59.193 "abort": true, 00:08:59.193 "seek_hole": false, 00:08:59.193 "seek_data": false, 00:08:59.193 "copy": true, 00:08:59.193 "nvme_iov_md": false 00:08:59.193 }, 00:08:59.193 "memory_domains": [ 00:08:59.193 { 00:08:59.193 "dma_device_id": "system", 00:08:59.193 "dma_device_type": 1 00:08:59.193 }, 00:08:59.193 { 00:08:59.193 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:59.193 "dma_device_type": 2 00:08:59.193 } 00:08:59.193 ], 00:08:59.193 "driver_specific": {} 00:08:59.193 } 00:08:59.193 ] 00:08:59.193 19:49:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.193 19:49:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 
00:08:59.193 19:49:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:59.193 19:49:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:59.194 19:49:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:08:59.194 19:49:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.194 19:49:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:59.194 [2024-11-26 19:49:49.987614] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:59.194 [2024-11-26 19:49:49.987757] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:59.194 [2024-11-26 19:49:49.987824] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:59.194 [2024-11-26 19:49:49.989772] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:59.194 [2024-11-26 19:49:49.989893] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:08:59.194 19:49:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.194 19:49:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:08:59.194 19:49:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:59.194 19:49:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:59.194 19:49:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:59.194 19:49:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:08:59.194 19:49:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:08:59.194 19:49:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:59.194 19:49:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:59.194 19:49:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:59.194 19:49:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:59.194 19:49:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:59.194 19:49:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:59.194 19:49:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.194 19:49:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:59.194 19:49:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.194 19:49:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:59.194 "name": "Existed_Raid", 00:08:59.194 "uuid": "9c729a08-1831-433d-9d86-a6dbd14fe150", 00:08:59.194 "strip_size_kb": 0, 00:08:59.194 "state": "configuring", 00:08:59.194 "raid_level": "raid1", 00:08:59.194 "superblock": true, 00:08:59.194 "num_base_bdevs": 4, 00:08:59.194 "num_base_bdevs_discovered": 3, 00:08:59.194 "num_base_bdevs_operational": 4, 00:08:59.194 "base_bdevs_list": [ 00:08:59.194 { 00:08:59.194 "name": "BaseBdev1", 00:08:59.194 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:59.194 "is_configured": false, 00:08:59.194 "data_offset": 0, 00:08:59.194 "data_size": 0 00:08:59.194 }, 00:08:59.194 { 00:08:59.194 "name": "BaseBdev2", 00:08:59.194 "uuid": "c44bae53-c65a-4b3c-993e-788f39bab30f", 
00:08:59.194 "is_configured": true, 00:08:59.194 "data_offset": 2048, 00:08:59.194 "data_size": 63488 00:08:59.194 }, 00:08:59.194 { 00:08:59.194 "name": "BaseBdev3", 00:08:59.194 "uuid": "a201447e-578c-454e-a6e4-11954cd5763c", 00:08:59.194 "is_configured": true, 00:08:59.194 "data_offset": 2048, 00:08:59.194 "data_size": 63488 00:08:59.194 }, 00:08:59.194 { 00:08:59.194 "name": "BaseBdev4", 00:08:59.194 "uuid": "a02fec7e-e8b9-48eb-845b-714142a36006", 00:08:59.194 "is_configured": true, 00:08:59.194 "data_offset": 2048, 00:08:59.194 "data_size": 63488 00:08:59.194 } 00:08:59.194 ] 00:08:59.194 }' 00:08:59.194 19:49:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:59.194 19:49:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:59.452 19:49:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:08:59.452 19:49:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.452 19:49:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:59.452 [2024-11-26 19:49:50.319727] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:59.452 19:49:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.452 19:49:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:08:59.452 19:49:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:59.452 19:49:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:59.452 19:49:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:59.452 19:49:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:08:59.452 19:49:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:08:59.452 19:49:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:59.452 19:49:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:59.452 19:49:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:59.452 19:49:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:59.452 19:49:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:59.452 19:49:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:59.452 19:49:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.452 19:49:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:59.452 19:49:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.452 19:49:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:59.452 "name": "Existed_Raid", 00:08:59.452 "uuid": "9c729a08-1831-433d-9d86-a6dbd14fe150", 00:08:59.452 "strip_size_kb": 0, 00:08:59.452 "state": "configuring", 00:08:59.452 "raid_level": "raid1", 00:08:59.452 "superblock": true, 00:08:59.452 "num_base_bdevs": 4, 00:08:59.452 "num_base_bdevs_discovered": 2, 00:08:59.452 "num_base_bdevs_operational": 4, 00:08:59.452 "base_bdevs_list": [ 00:08:59.452 { 00:08:59.452 "name": "BaseBdev1", 00:08:59.452 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:59.453 "is_configured": false, 00:08:59.453 "data_offset": 0, 00:08:59.453 "data_size": 0 00:08:59.453 }, 00:08:59.453 { 00:08:59.453 "name": null, 00:08:59.453 "uuid": "c44bae53-c65a-4b3c-993e-788f39bab30f", 00:08:59.453 
"is_configured": false, 00:08:59.453 "data_offset": 0, 00:08:59.453 "data_size": 63488 00:08:59.453 }, 00:08:59.453 { 00:08:59.453 "name": "BaseBdev3", 00:08:59.453 "uuid": "a201447e-578c-454e-a6e4-11954cd5763c", 00:08:59.453 "is_configured": true, 00:08:59.453 "data_offset": 2048, 00:08:59.453 "data_size": 63488 00:08:59.453 }, 00:08:59.453 { 00:08:59.453 "name": "BaseBdev4", 00:08:59.453 "uuid": "a02fec7e-e8b9-48eb-845b-714142a36006", 00:08:59.453 "is_configured": true, 00:08:59.453 "data_offset": 2048, 00:08:59.453 "data_size": 63488 00:08:59.453 } 00:08:59.453 ] 00:08:59.453 }' 00:08:59.453 19:49:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:59.453 19:49:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:00.019 19:49:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:00.019 19:49:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:00.019 19:49:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.019 19:49:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:00.019 19:49:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.019 19:49:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:00.019 19:49:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:00.019 19:49:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.019 19:49:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:00.019 [2024-11-26 19:49:50.720381] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:00.019 BaseBdev1 
00:09:00.019 19:49:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.019 19:49:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:00.019 19:49:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:00.019 19:49:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:00.019 19:49:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:00.019 19:49:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:00.019 19:49:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:00.019 19:49:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:00.019 19:49:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.019 19:49:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:00.019 19:49:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.019 19:49:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:00.019 19:49:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.019 19:49:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:00.019 [ 00:09:00.019 { 00:09:00.019 "name": "BaseBdev1", 00:09:00.019 "aliases": [ 00:09:00.019 "d3b10395-cb2b-4c70-894c-818b4c5c19e1" 00:09:00.019 ], 00:09:00.019 "product_name": "Malloc disk", 00:09:00.019 "block_size": 512, 00:09:00.019 "num_blocks": 65536, 00:09:00.019 "uuid": "d3b10395-cb2b-4c70-894c-818b4c5c19e1", 00:09:00.019 "assigned_rate_limits": { 00:09:00.019 
"rw_ios_per_sec": 0, 00:09:00.019 "rw_mbytes_per_sec": 0, 00:09:00.019 "r_mbytes_per_sec": 0, 00:09:00.019 "w_mbytes_per_sec": 0 00:09:00.019 }, 00:09:00.019 "claimed": true, 00:09:00.019 "claim_type": "exclusive_write", 00:09:00.019 "zoned": false, 00:09:00.019 "supported_io_types": { 00:09:00.019 "read": true, 00:09:00.019 "write": true, 00:09:00.019 "unmap": true, 00:09:00.019 "flush": true, 00:09:00.019 "reset": true, 00:09:00.019 "nvme_admin": false, 00:09:00.019 "nvme_io": false, 00:09:00.019 "nvme_io_md": false, 00:09:00.019 "write_zeroes": true, 00:09:00.019 "zcopy": true, 00:09:00.019 "get_zone_info": false, 00:09:00.019 "zone_management": false, 00:09:00.019 "zone_append": false, 00:09:00.019 "compare": false, 00:09:00.019 "compare_and_write": false, 00:09:00.019 "abort": true, 00:09:00.019 "seek_hole": false, 00:09:00.019 "seek_data": false, 00:09:00.019 "copy": true, 00:09:00.019 "nvme_iov_md": false 00:09:00.019 }, 00:09:00.019 "memory_domains": [ 00:09:00.019 { 00:09:00.019 "dma_device_id": "system", 00:09:00.019 "dma_device_type": 1 00:09:00.019 }, 00:09:00.019 { 00:09:00.019 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:00.019 "dma_device_type": 2 00:09:00.019 } 00:09:00.019 ], 00:09:00.019 "driver_specific": {} 00:09:00.019 } 00:09:00.019 ] 00:09:00.019 19:49:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.019 19:49:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:00.019 19:49:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:09:00.019 19:49:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:00.019 19:49:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:00.019 19:49:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:09:00.019 19:49:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:00.019 19:49:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:00.019 19:49:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:00.019 19:49:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:00.019 19:49:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:00.019 19:49:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:00.019 19:49:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:00.019 19:49:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.019 19:49:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:00.019 19:49:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:00.019 19:49:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.019 19:49:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:00.019 "name": "Existed_Raid", 00:09:00.019 "uuid": "9c729a08-1831-433d-9d86-a6dbd14fe150", 00:09:00.019 "strip_size_kb": 0, 00:09:00.019 "state": "configuring", 00:09:00.019 "raid_level": "raid1", 00:09:00.019 "superblock": true, 00:09:00.019 "num_base_bdevs": 4, 00:09:00.019 "num_base_bdevs_discovered": 3, 00:09:00.019 "num_base_bdevs_operational": 4, 00:09:00.019 "base_bdevs_list": [ 00:09:00.019 { 00:09:00.019 "name": "BaseBdev1", 00:09:00.020 "uuid": "d3b10395-cb2b-4c70-894c-818b4c5c19e1", 00:09:00.020 "is_configured": true, 00:09:00.020 "data_offset": 2048, 00:09:00.020 "data_size": 63488 
00:09:00.020 }, 00:09:00.020 { 00:09:00.020 "name": null, 00:09:00.020 "uuid": "c44bae53-c65a-4b3c-993e-788f39bab30f", 00:09:00.020 "is_configured": false, 00:09:00.020 "data_offset": 0, 00:09:00.020 "data_size": 63488 00:09:00.020 }, 00:09:00.020 { 00:09:00.020 "name": "BaseBdev3", 00:09:00.020 "uuid": "a201447e-578c-454e-a6e4-11954cd5763c", 00:09:00.020 "is_configured": true, 00:09:00.020 "data_offset": 2048, 00:09:00.020 "data_size": 63488 00:09:00.020 }, 00:09:00.020 { 00:09:00.020 "name": "BaseBdev4", 00:09:00.020 "uuid": "a02fec7e-e8b9-48eb-845b-714142a36006", 00:09:00.020 "is_configured": true, 00:09:00.020 "data_offset": 2048, 00:09:00.020 "data_size": 63488 00:09:00.020 } 00:09:00.020 ] 00:09:00.020 }' 00:09:00.020 19:49:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:00.020 19:49:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:00.279 19:49:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:00.279 19:49:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:00.279 19:49:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.279 19:49:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:00.279 19:49:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.279 19:49:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:00.279 19:49:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:00.279 19:49:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.279 19:49:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:00.279 
[2024-11-26 19:49:51.096558] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:00.279 19:49:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.279 19:49:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:09:00.279 19:49:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:00.279 19:49:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:00.279 19:49:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:00.279 19:49:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:00.279 19:49:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:00.279 19:49:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:00.279 19:49:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:00.279 19:49:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:00.279 19:49:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:00.279 19:49:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:00.279 19:49:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:00.279 19:49:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.279 19:49:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:00.279 19:49:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.279 19:49:51 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:00.279 "name": "Existed_Raid", 00:09:00.279 "uuid": "9c729a08-1831-433d-9d86-a6dbd14fe150", 00:09:00.279 "strip_size_kb": 0, 00:09:00.279 "state": "configuring", 00:09:00.279 "raid_level": "raid1", 00:09:00.279 "superblock": true, 00:09:00.279 "num_base_bdevs": 4, 00:09:00.279 "num_base_bdevs_discovered": 2, 00:09:00.279 "num_base_bdevs_operational": 4, 00:09:00.279 "base_bdevs_list": [ 00:09:00.279 { 00:09:00.279 "name": "BaseBdev1", 00:09:00.279 "uuid": "d3b10395-cb2b-4c70-894c-818b4c5c19e1", 00:09:00.279 "is_configured": true, 00:09:00.279 "data_offset": 2048, 00:09:00.279 "data_size": 63488 00:09:00.279 }, 00:09:00.279 { 00:09:00.279 "name": null, 00:09:00.279 "uuid": "c44bae53-c65a-4b3c-993e-788f39bab30f", 00:09:00.279 "is_configured": false, 00:09:00.279 "data_offset": 0, 00:09:00.279 "data_size": 63488 00:09:00.279 }, 00:09:00.279 { 00:09:00.279 "name": null, 00:09:00.279 "uuid": "a201447e-578c-454e-a6e4-11954cd5763c", 00:09:00.279 "is_configured": false, 00:09:00.279 "data_offset": 0, 00:09:00.279 "data_size": 63488 00:09:00.279 }, 00:09:00.279 { 00:09:00.279 "name": "BaseBdev4", 00:09:00.279 "uuid": "a02fec7e-e8b9-48eb-845b-714142a36006", 00:09:00.279 "is_configured": true, 00:09:00.279 "data_offset": 2048, 00:09:00.279 "data_size": 63488 00:09:00.279 } 00:09:00.279 ] 00:09:00.279 }' 00:09:00.279 19:49:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:00.279 19:49:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:00.537 19:49:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:00.537 19:49:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:00.537 19:49:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.537 
19:49:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:00.537 19:49:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.537 19:49:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:00.537 19:49:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:00.537 19:49:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.537 19:49:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:00.797 [2024-11-26 19:49:51.472631] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:00.797 19:49:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.797 19:49:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:09:00.797 19:49:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:00.797 19:49:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:00.797 19:49:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:00.797 19:49:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:00.797 19:49:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:00.797 19:49:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:00.797 19:49:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:00.797 19:49:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:09:00.797 19:49:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:00.797 19:49:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:00.797 19:49:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.797 19:49:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:00.797 19:49:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:00.797 19:49:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.797 19:49:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:00.797 "name": "Existed_Raid", 00:09:00.797 "uuid": "9c729a08-1831-433d-9d86-a6dbd14fe150", 00:09:00.797 "strip_size_kb": 0, 00:09:00.797 "state": "configuring", 00:09:00.797 "raid_level": "raid1", 00:09:00.797 "superblock": true, 00:09:00.797 "num_base_bdevs": 4, 00:09:00.797 "num_base_bdevs_discovered": 3, 00:09:00.797 "num_base_bdevs_operational": 4, 00:09:00.797 "base_bdevs_list": [ 00:09:00.797 { 00:09:00.797 "name": "BaseBdev1", 00:09:00.797 "uuid": "d3b10395-cb2b-4c70-894c-818b4c5c19e1", 00:09:00.797 "is_configured": true, 00:09:00.797 "data_offset": 2048, 00:09:00.797 "data_size": 63488 00:09:00.797 }, 00:09:00.797 { 00:09:00.797 "name": null, 00:09:00.797 "uuid": "c44bae53-c65a-4b3c-993e-788f39bab30f", 00:09:00.797 "is_configured": false, 00:09:00.797 "data_offset": 0, 00:09:00.797 "data_size": 63488 00:09:00.797 }, 00:09:00.797 { 00:09:00.797 "name": "BaseBdev3", 00:09:00.797 "uuid": "a201447e-578c-454e-a6e4-11954cd5763c", 00:09:00.797 "is_configured": true, 00:09:00.797 "data_offset": 2048, 00:09:00.797 "data_size": 63488 00:09:00.797 }, 00:09:00.797 { 00:09:00.797 "name": "BaseBdev4", 00:09:00.797 "uuid": 
"a02fec7e-e8b9-48eb-845b-714142a36006", 00:09:00.797 "is_configured": true, 00:09:00.797 "data_offset": 2048, 00:09:00.797 "data_size": 63488 00:09:00.797 } 00:09:00.797 ] 00:09:00.797 }' 00:09:00.798 19:49:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:00.798 19:49:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:01.057 19:49:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:01.057 19:49:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:01.057 19:49:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.057 19:49:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:01.057 19:49:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.057 19:49:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:01.057 19:49:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:01.057 19:49:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.057 19:49:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:01.057 [2024-11-26 19:49:51.820701] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:01.057 19:49:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.057 19:49:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:09:01.057 19:49:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:01.057 19:49:51 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:01.057 19:49:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:01.057 19:49:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:01.057 19:49:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:01.057 19:49:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:01.057 19:49:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:01.057 19:49:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:01.057 19:49:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:01.057 19:49:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:01.057 19:49:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:01.057 19:49:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.057 19:49:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:01.057 19:49:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.057 19:49:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:01.057 "name": "Existed_Raid", 00:09:01.057 "uuid": "9c729a08-1831-433d-9d86-a6dbd14fe150", 00:09:01.057 "strip_size_kb": 0, 00:09:01.057 "state": "configuring", 00:09:01.057 "raid_level": "raid1", 00:09:01.057 "superblock": true, 00:09:01.057 "num_base_bdevs": 4, 00:09:01.057 "num_base_bdevs_discovered": 2, 00:09:01.057 "num_base_bdevs_operational": 4, 00:09:01.057 "base_bdevs_list": [ 00:09:01.057 { 00:09:01.057 "name": null, 00:09:01.057 
"uuid": "d3b10395-cb2b-4c70-894c-818b4c5c19e1", 00:09:01.057 "is_configured": false, 00:09:01.057 "data_offset": 0, 00:09:01.057 "data_size": 63488 00:09:01.057 }, 00:09:01.057 { 00:09:01.057 "name": null, 00:09:01.057 "uuid": "c44bae53-c65a-4b3c-993e-788f39bab30f", 00:09:01.057 "is_configured": false, 00:09:01.057 "data_offset": 0, 00:09:01.057 "data_size": 63488 00:09:01.057 }, 00:09:01.057 { 00:09:01.057 "name": "BaseBdev3", 00:09:01.057 "uuid": "a201447e-578c-454e-a6e4-11954cd5763c", 00:09:01.057 "is_configured": true, 00:09:01.057 "data_offset": 2048, 00:09:01.057 "data_size": 63488 00:09:01.057 }, 00:09:01.057 { 00:09:01.057 "name": "BaseBdev4", 00:09:01.057 "uuid": "a02fec7e-e8b9-48eb-845b-714142a36006", 00:09:01.057 "is_configured": true, 00:09:01.057 "data_offset": 2048, 00:09:01.057 "data_size": 63488 00:09:01.057 } 00:09:01.057 ] 00:09:01.057 }' 00:09:01.057 19:49:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:01.057 19:49:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:01.316 19:49:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:01.316 19:49:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:01.316 19:49:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.316 19:49:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:01.575 19:49:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.575 19:49:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:01.575 19:49:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:01.575 19:49:52 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.575 19:49:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:01.575 [2024-11-26 19:49:52.285707] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:01.575 19:49:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.575 19:49:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:09:01.575 19:49:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:01.575 19:49:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:01.575 19:49:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:01.575 19:49:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:01.575 19:49:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:01.575 19:49:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:01.575 19:49:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:01.575 19:49:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:01.575 19:49:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:01.575 19:49:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:01.575 19:49:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:01.575 19:49:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.575 19:49:52 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:01.575 19:49:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.575 19:49:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:01.575 "name": "Existed_Raid", 00:09:01.575 "uuid": "9c729a08-1831-433d-9d86-a6dbd14fe150", 00:09:01.575 "strip_size_kb": 0, 00:09:01.575 "state": "configuring", 00:09:01.575 "raid_level": "raid1", 00:09:01.575 "superblock": true, 00:09:01.575 "num_base_bdevs": 4, 00:09:01.575 "num_base_bdevs_discovered": 3, 00:09:01.575 "num_base_bdevs_operational": 4, 00:09:01.575 "base_bdevs_list": [ 00:09:01.575 { 00:09:01.575 "name": null, 00:09:01.575 "uuid": "d3b10395-cb2b-4c70-894c-818b4c5c19e1", 00:09:01.575 "is_configured": false, 00:09:01.575 "data_offset": 0, 00:09:01.575 "data_size": 63488 00:09:01.575 }, 00:09:01.575 { 00:09:01.575 "name": "BaseBdev2", 00:09:01.575 "uuid": "c44bae53-c65a-4b3c-993e-788f39bab30f", 00:09:01.575 "is_configured": true, 00:09:01.575 "data_offset": 2048, 00:09:01.575 "data_size": 63488 00:09:01.575 }, 00:09:01.575 { 00:09:01.575 "name": "BaseBdev3", 00:09:01.575 "uuid": "a201447e-578c-454e-a6e4-11954cd5763c", 00:09:01.575 "is_configured": true, 00:09:01.575 "data_offset": 2048, 00:09:01.575 "data_size": 63488 00:09:01.575 }, 00:09:01.575 { 00:09:01.575 "name": "BaseBdev4", 00:09:01.575 "uuid": "a02fec7e-e8b9-48eb-845b-714142a36006", 00:09:01.575 "is_configured": true, 00:09:01.575 "data_offset": 2048, 00:09:01.575 "data_size": 63488 00:09:01.575 } 00:09:01.575 ] 00:09:01.575 }' 00:09:01.575 19:49:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:01.575 19:49:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:01.834 19:49:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:01.834 19:49:52 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.834 19:49:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:01.834 19:49:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:01.834 19:49:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.834 19:49:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:01.834 19:49:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:01.834 19:49:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:01.834 19:49:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.834 19:49:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:01.834 19:49:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.834 19:49:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u d3b10395-cb2b-4c70-894c-818b4c5c19e1 00:09:01.834 19:49:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.834 19:49:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:01.834 [2024-11-26 19:49:52.689690] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:01.834 [2024-11-26 19:49:52.689890] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:01.834 [2024-11-26 19:49:52.689904] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:01.834 NewBaseBdev 00:09:01.834 [2024-11-26 19:49:52.690127] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:09:01.834 [2024-11-26 19:49:52.690245] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:01.834 [2024-11-26 19:49:52.690252] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:09:01.834 [2024-11-26 19:49:52.690371] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:01.834 19:49:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.834 19:49:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:01.834 19:49:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:09:01.834 19:49:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:01.834 19:49:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:01.834 19:49:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:01.834 19:49:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:01.834 19:49:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:01.834 19:49:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.834 19:49:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:01.834 19:49:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.834 19:49:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:01.834 19:49:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.834 
19:49:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:01.834 [ 00:09:01.834 { 00:09:01.834 "name": "NewBaseBdev", 00:09:01.834 "aliases": [ 00:09:01.834 "d3b10395-cb2b-4c70-894c-818b4c5c19e1" 00:09:01.834 ], 00:09:01.834 "product_name": "Malloc disk", 00:09:01.834 "block_size": 512, 00:09:01.834 "num_blocks": 65536, 00:09:01.834 "uuid": "d3b10395-cb2b-4c70-894c-818b4c5c19e1", 00:09:01.834 "assigned_rate_limits": { 00:09:01.834 "rw_ios_per_sec": 0, 00:09:01.834 "rw_mbytes_per_sec": 0, 00:09:01.834 "r_mbytes_per_sec": 0, 00:09:01.834 "w_mbytes_per_sec": 0 00:09:01.834 }, 00:09:01.834 "claimed": true, 00:09:01.834 "claim_type": "exclusive_write", 00:09:01.834 "zoned": false, 00:09:01.834 "supported_io_types": { 00:09:01.834 "read": true, 00:09:01.834 "write": true, 00:09:01.834 "unmap": true, 00:09:01.834 "flush": true, 00:09:01.834 "reset": true, 00:09:01.834 "nvme_admin": false, 00:09:01.834 "nvme_io": false, 00:09:01.834 "nvme_io_md": false, 00:09:01.834 "write_zeroes": true, 00:09:01.834 "zcopy": true, 00:09:01.834 "get_zone_info": false, 00:09:01.834 "zone_management": false, 00:09:01.834 "zone_append": false, 00:09:01.834 "compare": false, 00:09:01.834 "compare_and_write": false, 00:09:01.834 "abort": true, 00:09:01.834 "seek_hole": false, 00:09:01.834 "seek_data": false, 00:09:01.834 "copy": true, 00:09:01.834 "nvme_iov_md": false 00:09:01.834 }, 00:09:01.834 "memory_domains": [ 00:09:01.834 { 00:09:01.834 "dma_device_id": "system", 00:09:01.834 "dma_device_type": 1 00:09:01.834 }, 00:09:01.834 { 00:09:01.834 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:01.834 "dma_device_type": 2 00:09:01.834 } 00:09:01.834 ], 00:09:01.834 "driver_specific": {} 00:09:01.834 } 00:09:01.834 ] 00:09:01.834 19:49:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.834 19:49:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:01.834 19:49:52 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:09:01.834 19:49:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:01.834 19:49:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:01.834 19:49:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:01.834 19:49:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:01.834 19:49:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:01.834 19:49:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:01.834 19:49:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:01.834 19:49:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:01.834 19:49:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:01.834 19:49:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:01.834 19:49:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:01.834 19:49:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.834 19:49:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:01.834 19:49:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.834 19:49:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:01.834 "name": "Existed_Raid", 00:09:01.834 "uuid": "9c729a08-1831-433d-9d86-a6dbd14fe150", 00:09:01.834 "strip_size_kb": 0, 00:09:01.834 
"state": "online", 00:09:01.834 "raid_level": "raid1", 00:09:01.834 "superblock": true, 00:09:01.834 "num_base_bdevs": 4, 00:09:01.834 "num_base_bdevs_discovered": 4, 00:09:01.834 "num_base_bdevs_operational": 4, 00:09:01.834 "base_bdevs_list": [ 00:09:01.834 { 00:09:01.834 "name": "NewBaseBdev", 00:09:01.834 "uuid": "d3b10395-cb2b-4c70-894c-818b4c5c19e1", 00:09:01.834 "is_configured": true, 00:09:01.834 "data_offset": 2048, 00:09:01.834 "data_size": 63488 00:09:01.834 }, 00:09:01.835 { 00:09:01.835 "name": "BaseBdev2", 00:09:01.835 "uuid": "c44bae53-c65a-4b3c-993e-788f39bab30f", 00:09:01.835 "is_configured": true, 00:09:01.835 "data_offset": 2048, 00:09:01.835 "data_size": 63488 00:09:01.835 }, 00:09:01.835 { 00:09:01.835 "name": "BaseBdev3", 00:09:01.835 "uuid": "a201447e-578c-454e-a6e4-11954cd5763c", 00:09:01.835 "is_configured": true, 00:09:01.835 "data_offset": 2048, 00:09:01.835 "data_size": 63488 00:09:01.835 }, 00:09:01.835 { 00:09:01.835 "name": "BaseBdev4", 00:09:01.835 "uuid": "a02fec7e-e8b9-48eb-845b-714142a36006", 00:09:01.835 "is_configured": true, 00:09:01.835 "data_offset": 2048, 00:09:01.835 "data_size": 63488 00:09:01.835 } 00:09:01.835 ] 00:09:01.835 }' 00:09:01.835 19:49:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:01.835 19:49:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:02.093 19:49:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:02.093 19:49:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:02.093 19:49:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:02.093 19:49:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:02.093 19:49:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:02.093 
19:49:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:02.093 19:49:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:02.093 19:49:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:02.093 19:49:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.093 19:49:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:02.093 [2024-11-26 19:49:53.022110] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:02.352 19:49:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.352 19:49:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:02.352 "name": "Existed_Raid", 00:09:02.352 "aliases": [ 00:09:02.352 "9c729a08-1831-433d-9d86-a6dbd14fe150" 00:09:02.352 ], 00:09:02.352 "product_name": "Raid Volume", 00:09:02.352 "block_size": 512, 00:09:02.352 "num_blocks": 63488, 00:09:02.352 "uuid": "9c729a08-1831-433d-9d86-a6dbd14fe150", 00:09:02.352 "assigned_rate_limits": { 00:09:02.352 "rw_ios_per_sec": 0, 00:09:02.352 "rw_mbytes_per_sec": 0, 00:09:02.352 "r_mbytes_per_sec": 0, 00:09:02.352 "w_mbytes_per_sec": 0 00:09:02.352 }, 00:09:02.352 "claimed": false, 00:09:02.352 "zoned": false, 00:09:02.352 "supported_io_types": { 00:09:02.352 "read": true, 00:09:02.352 "write": true, 00:09:02.352 "unmap": false, 00:09:02.352 "flush": false, 00:09:02.352 "reset": true, 00:09:02.352 "nvme_admin": false, 00:09:02.352 "nvme_io": false, 00:09:02.352 "nvme_io_md": false, 00:09:02.352 "write_zeroes": true, 00:09:02.352 "zcopy": false, 00:09:02.352 "get_zone_info": false, 00:09:02.352 "zone_management": false, 00:09:02.352 "zone_append": false, 00:09:02.352 "compare": false, 00:09:02.352 "compare_and_write": false, 00:09:02.352 
"abort": false, 00:09:02.352 "seek_hole": false, 00:09:02.352 "seek_data": false, 00:09:02.352 "copy": false, 00:09:02.352 "nvme_iov_md": false 00:09:02.352 }, 00:09:02.352 "memory_domains": [ 00:09:02.352 { 00:09:02.352 "dma_device_id": "system", 00:09:02.352 "dma_device_type": 1 00:09:02.352 }, 00:09:02.352 { 00:09:02.352 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:02.352 "dma_device_type": 2 00:09:02.352 }, 00:09:02.352 { 00:09:02.352 "dma_device_id": "system", 00:09:02.352 "dma_device_type": 1 00:09:02.352 }, 00:09:02.352 { 00:09:02.352 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:02.352 "dma_device_type": 2 00:09:02.352 }, 00:09:02.352 { 00:09:02.352 "dma_device_id": "system", 00:09:02.352 "dma_device_type": 1 00:09:02.352 }, 00:09:02.352 { 00:09:02.352 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:02.352 "dma_device_type": 2 00:09:02.352 }, 00:09:02.352 { 00:09:02.352 "dma_device_id": "system", 00:09:02.352 "dma_device_type": 1 00:09:02.352 }, 00:09:02.352 { 00:09:02.352 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:02.352 "dma_device_type": 2 00:09:02.352 } 00:09:02.352 ], 00:09:02.352 "driver_specific": { 00:09:02.352 "raid": { 00:09:02.352 "uuid": "9c729a08-1831-433d-9d86-a6dbd14fe150", 00:09:02.352 "strip_size_kb": 0, 00:09:02.352 "state": "online", 00:09:02.352 "raid_level": "raid1", 00:09:02.352 "superblock": true, 00:09:02.352 "num_base_bdevs": 4, 00:09:02.352 "num_base_bdevs_discovered": 4, 00:09:02.352 "num_base_bdevs_operational": 4, 00:09:02.352 "base_bdevs_list": [ 00:09:02.352 { 00:09:02.352 "name": "NewBaseBdev", 00:09:02.352 "uuid": "d3b10395-cb2b-4c70-894c-818b4c5c19e1", 00:09:02.352 "is_configured": true, 00:09:02.352 "data_offset": 2048, 00:09:02.352 "data_size": 63488 00:09:02.352 }, 00:09:02.352 { 00:09:02.352 "name": "BaseBdev2", 00:09:02.352 "uuid": "c44bae53-c65a-4b3c-993e-788f39bab30f", 00:09:02.352 "is_configured": true, 00:09:02.352 "data_offset": 2048, 00:09:02.352 "data_size": 63488 00:09:02.352 }, 00:09:02.352 { 
00:09:02.352 "name": "BaseBdev3", 00:09:02.352 "uuid": "a201447e-578c-454e-a6e4-11954cd5763c", 00:09:02.352 "is_configured": true, 00:09:02.352 "data_offset": 2048, 00:09:02.352 "data_size": 63488 00:09:02.352 }, 00:09:02.352 { 00:09:02.352 "name": "BaseBdev4", 00:09:02.352 "uuid": "a02fec7e-e8b9-48eb-845b-714142a36006", 00:09:02.352 "is_configured": true, 00:09:02.352 "data_offset": 2048, 00:09:02.352 "data_size": 63488 00:09:02.352 } 00:09:02.352 ] 00:09:02.352 } 00:09:02.352 } 00:09:02.352 }' 00:09:02.352 19:49:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:02.352 19:49:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:02.352 BaseBdev2 00:09:02.352 BaseBdev3 00:09:02.352 BaseBdev4' 00:09:02.352 19:49:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:02.352 19:49:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:02.352 19:49:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:02.352 19:49:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:02.352 19:49:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:02.352 19:49:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.352 19:49:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:02.352 19:49:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.352 19:49:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
00:09:02.352 19:49:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:02.352 19:49:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:02.352 19:49:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:02.352 19:49:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.352 19:49:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:02.352 19:49:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:02.353 19:49:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.353 19:49:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:02.353 19:49:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:02.353 19:49:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:02.353 19:49:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:02.353 19:49:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.353 19:49:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:02.353 19:49:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:02.353 19:49:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.353 19:49:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:02.353 19:49:53 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:02.353 19:49:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:02.353 19:49:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:02.353 19:49:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:09:02.353 19:49:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.353 19:49:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:02.353 19:49:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.353 19:49:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:02.353 19:49:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:02.353 19:49:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:02.353 19:49:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.353 19:49:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:02.353 [2024-11-26 19:49:53.237805] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:02.353 [2024-11-26 19:49:53.237829] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:02.353 [2024-11-26 19:49:53.237898] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:02.353 [2024-11-26 19:49:53.238158] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:02.353 [2024-11-26 19:49:53.238169] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008200 name Existed_Raid, state offline 00:09:02.353 19:49:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.353 19:49:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 71905 00:09:02.353 19:49:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 71905 ']' 00:09:02.353 19:49:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 71905 00:09:02.353 19:49:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:09:02.353 19:49:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:02.353 19:49:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71905 00:09:02.353 killing process with pid 71905 00:09:02.353 19:49:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:02.353 19:49:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:02.353 19:49:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71905' 00:09:02.353 19:49:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 71905 00:09:02.353 [2024-11-26 19:49:53.265973] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:02.353 19:49:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 71905 00:09:02.611 [2024-11-26 19:49:53.468572] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:03.176 19:49:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:09:03.176 00:09:03.176 real 0m8.286s 00:09:03.176 user 0m13.311s 00:09:03.176 sys 0m1.386s 00:09:03.176 ************************************ 00:09:03.176 END TEST raid_state_function_test_sb 
00:09:03.176 ************************************ 00:09:03.176 19:49:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:03.176 19:49:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:03.435 19:49:54 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:09:03.435 19:49:54 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:03.435 19:49:54 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:03.435 19:49:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:03.435 ************************************ 00:09:03.435 START TEST raid_superblock_test 00:09:03.435 ************************************ 00:09:03.435 19:49:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 4 00:09:03.435 19:49:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:09:03.435 19:49:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:09:03.435 19:49:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:09:03.435 19:49:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:09:03.435 19:49:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:09:03.435 19:49:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:09:03.435 19:49:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:09:03.435 19:49:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:09:03.435 19:49:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:09:03.435 19:49:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:09:03.435 19:49:54 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:09:03.435 19:49:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:09:03.435 19:49:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:09:03.435 19:49:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:09:03.435 19:49:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:09:03.436 19:49:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=72541 00:09:03.436 19:49:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 72541 00:09:03.436 19:49:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 72541 ']' 00:09:03.436 19:49:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:03.436 19:49:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:03.436 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:03.436 19:49:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:03.436 19:49:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:03.436 19:49:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.436 19:49:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:09:03.436 [2024-11-26 19:49:54.188640] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 
00:09:03.436 [2024-11-26 19:49:54.188749] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72541 ] 00:09:03.436 [2024-11-26 19:49:54.341403] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:03.697 [2024-11-26 19:49:54.441190] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:03.697 [2024-11-26 19:49:54.560855] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:03.697 [2024-11-26 19:49:54.560915] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:04.265 19:49:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:04.265 19:49:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:09:04.265 19:49:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:09:04.265 19:49:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:04.265 19:49:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:09:04.265 19:49:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:09:04.265 19:49:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:09:04.265 19:49:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:04.265 19:49:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:04.265 19:49:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:04.265 19:49:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:09:04.265 
19:49:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.265 19:49:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.265 malloc1 00:09:04.265 19:49:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.265 19:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:04.265 19:49:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.265 19:49:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.265 [2024-11-26 19:49:55.031713] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:04.265 [2024-11-26 19:49:55.031865] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:04.265 [2024-11-26 19:49:55.031902] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:04.265 [2024-11-26 19:49:55.032102] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:04.265 [2024-11-26 19:49:55.034071] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:04.265 [2024-11-26 19:49:55.034170] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:04.265 pt1 00:09:04.265 19:49:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.265 19:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:04.265 19:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:04.265 19:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:09:04.265 19:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:09:04.265 19:49:55 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:09:04.265 19:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:04.265 19:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:04.265 19:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:04.265 19:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:09:04.265 19:49:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.265 19:49:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.265 malloc2 00:09:04.265 19:49:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.265 19:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:04.265 19:49:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.265 19:49:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.265 [2024-11-26 19:49:55.069451] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:04.265 [2024-11-26 19:49:55.069503] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:04.265 [2024-11-26 19:49:55.069525] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:09:04.265 [2024-11-26 19:49:55.069532] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:04.265 [2024-11-26 19:49:55.071447] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:04.265 [2024-11-26 19:49:55.071476] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:04.265 
pt2 00:09:04.265 19:49:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.265 19:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:04.265 19:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:04.265 19:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:09:04.265 19:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:09:04.265 19:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:09:04.265 19:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:04.265 19:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:04.265 19:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:04.265 19:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:09:04.265 19:49:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.265 19:49:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.265 malloc3 00:09:04.265 19:49:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.265 19:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:04.265 19:49:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.265 19:49:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.265 [2024-11-26 19:49:55.115321] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:04.265 [2024-11-26 19:49:55.115388] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:04.265 [2024-11-26 19:49:55.115410] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:09:04.265 [2024-11-26 19:49:55.115418] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:04.265 [2024-11-26 19:49:55.117321] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:04.265 [2024-11-26 19:49:55.117459] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:04.265 pt3 00:09:04.265 19:49:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.265 19:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:04.265 19:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:04.265 19:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:09:04.265 19:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:09:04.265 19:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:09:04.265 19:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:04.265 19:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:04.265 19:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:04.266 19:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:09:04.266 19:49:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.266 19:49:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.266 malloc4 00:09:04.266 19:49:55 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.266 19:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:09:04.266 19:49:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.266 19:49:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.266 [2024-11-26 19:49:55.148868] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:09:04.266 [2024-11-26 19:49:55.149003] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:04.266 [2024-11-26 19:49:55.149023] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:09:04.266 [2024-11-26 19:49:55.149031] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:04.266 [2024-11-26 19:49:55.150874] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:04.266 [2024-11-26 19:49:55.150898] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:09:04.266 pt4 00:09:04.266 19:49:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.266 19:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:04.266 19:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:04.266 19:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:09:04.266 19:49:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.266 19:49:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.266 [2024-11-26 19:49:55.156892] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:04.266 [2024-11-26 19:49:55.158509] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:04.266 [2024-11-26 19:49:55.158563] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:04.266 [2024-11-26 19:49:55.158613] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:09:04.266 [2024-11-26 19:49:55.158773] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:09:04.266 [2024-11-26 19:49:55.158786] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:04.266 [2024-11-26 19:49:55.159011] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:09:04.266 [2024-11-26 19:49:55.159143] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:09:04.266 [2024-11-26 19:49:55.159154] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:09:04.266 [2024-11-26 19:49:55.159273] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:04.266 19:49:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.266 19:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:09:04.266 19:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:04.266 19:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:04.266 19:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:04.266 19:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:04.266 19:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:04.266 19:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:04.266 
19:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:04.266 19:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:04.266 19:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:04.266 19:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:04.266 19:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:04.266 19:49:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.266 19:49:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.266 19:49:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.266 19:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:04.266 "name": "raid_bdev1", 00:09:04.266 "uuid": "cc5731d3-4c46-42b7-a647-d5f1b72cb25a", 00:09:04.266 "strip_size_kb": 0, 00:09:04.266 "state": "online", 00:09:04.266 "raid_level": "raid1", 00:09:04.266 "superblock": true, 00:09:04.266 "num_base_bdevs": 4, 00:09:04.266 "num_base_bdevs_discovered": 4, 00:09:04.266 "num_base_bdevs_operational": 4, 00:09:04.266 "base_bdevs_list": [ 00:09:04.266 { 00:09:04.266 "name": "pt1", 00:09:04.266 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:04.266 "is_configured": true, 00:09:04.266 "data_offset": 2048, 00:09:04.266 "data_size": 63488 00:09:04.266 }, 00:09:04.266 { 00:09:04.266 "name": "pt2", 00:09:04.266 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:04.266 "is_configured": true, 00:09:04.266 "data_offset": 2048, 00:09:04.266 "data_size": 63488 00:09:04.266 }, 00:09:04.266 { 00:09:04.266 "name": "pt3", 00:09:04.266 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:04.266 "is_configured": true, 00:09:04.266 "data_offset": 2048, 00:09:04.266 "data_size": 63488 
00:09:04.266 }, 00:09:04.266 { 00:09:04.266 "name": "pt4", 00:09:04.266 "uuid": "00000000-0000-0000-0000-000000000004", 00:09:04.266 "is_configured": true, 00:09:04.266 "data_offset": 2048, 00:09:04.266 "data_size": 63488 00:09:04.266 } 00:09:04.266 ] 00:09:04.266 }' 00:09:04.266 19:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:04.266 19:49:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.525 19:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:09:04.525 19:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:04.525 19:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:04.525 19:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:04.525 19:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:04.525 19:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:04.525 19:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:04.525 19:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:04.525 19:49:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.525 19:49:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.784 [2024-11-26 19:49:55.461261] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:04.784 19:49:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.784 19:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:04.784 "name": "raid_bdev1", 00:09:04.784 "aliases": [ 00:09:04.784 "cc5731d3-4c46-42b7-a647-d5f1b72cb25a" 00:09:04.784 ], 
00:09:04.784 "product_name": "Raid Volume", 00:09:04.784 "block_size": 512, 00:09:04.784 "num_blocks": 63488, 00:09:04.784 "uuid": "cc5731d3-4c46-42b7-a647-d5f1b72cb25a", 00:09:04.784 "assigned_rate_limits": { 00:09:04.784 "rw_ios_per_sec": 0, 00:09:04.784 "rw_mbytes_per_sec": 0, 00:09:04.784 "r_mbytes_per_sec": 0, 00:09:04.784 "w_mbytes_per_sec": 0 00:09:04.784 }, 00:09:04.784 "claimed": false, 00:09:04.784 "zoned": false, 00:09:04.784 "supported_io_types": { 00:09:04.784 "read": true, 00:09:04.784 "write": true, 00:09:04.784 "unmap": false, 00:09:04.784 "flush": false, 00:09:04.784 "reset": true, 00:09:04.784 "nvme_admin": false, 00:09:04.784 "nvme_io": false, 00:09:04.784 "nvme_io_md": false, 00:09:04.784 "write_zeroes": true, 00:09:04.784 "zcopy": false, 00:09:04.784 "get_zone_info": false, 00:09:04.784 "zone_management": false, 00:09:04.784 "zone_append": false, 00:09:04.784 "compare": false, 00:09:04.784 "compare_and_write": false, 00:09:04.784 "abort": false, 00:09:04.784 "seek_hole": false, 00:09:04.784 "seek_data": false, 00:09:04.784 "copy": false, 00:09:04.784 "nvme_iov_md": false 00:09:04.784 }, 00:09:04.784 "memory_domains": [ 00:09:04.784 { 00:09:04.784 "dma_device_id": "system", 00:09:04.784 "dma_device_type": 1 00:09:04.784 }, 00:09:04.784 { 00:09:04.784 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:04.784 "dma_device_type": 2 00:09:04.784 }, 00:09:04.784 { 00:09:04.784 "dma_device_id": "system", 00:09:04.784 "dma_device_type": 1 00:09:04.784 }, 00:09:04.784 { 00:09:04.784 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:04.784 "dma_device_type": 2 00:09:04.784 }, 00:09:04.784 { 00:09:04.784 "dma_device_id": "system", 00:09:04.784 "dma_device_type": 1 00:09:04.784 }, 00:09:04.784 { 00:09:04.784 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:04.784 "dma_device_type": 2 00:09:04.784 }, 00:09:04.784 { 00:09:04.784 "dma_device_id": "system", 00:09:04.784 "dma_device_type": 1 00:09:04.784 }, 00:09:04.784 { 00:09:04.784 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:09:04.784 "dma_device_type": 2 00:09:04.784 } 00:09:04.784 ], 00:09:04.784 "driver_specific": { 00:09:04.784 "raid": { 00:09:04.784 "uuid": "cc5731d3-4c46-42b7-a647-d5f1b72cb25a", 00:09:04.784 "strip_size_kb": 0, 00:09:04.784 "state": "online", 00:09:04.784 "raid_level": "raid1", 00:09:04.784 "superblock": true, 00:09:04.784 "num_base_bdevs": 4, 00:09:04.784 "num_base_bdevs_discovered": 4, 00:09:04.784 "num_base_bdevs_operational": 4, 00:09:04.784 "base_bdevs_list": [ 00:09:04.784 { 00:09:04.784 "name": "pt1", 00:09:04.784 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:04.784 "is_configured": true, 00:09:04.784 "data_offset": 2048, 00:09:04.784 "data_size": 63488 00:09:04.784 }, 00:09:04.784 { 00:09:04.784 "name": "pt2", 00:09:04.784 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:04.784 "is_configured": true, 00:09:04.784 "data_offset": 2048, 00:09:04.784 "data_size": 63488 00:09:04.784 }, 00:09:04.784 { 00:09:04.784 "name": "pt3", 00:09:04.784 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:04.784 "is_configured": true, 00:09:04.784 "data_offset": 2048, 00:09:04.784 "data_size": 63488 00:09:04.784 }, 00:09:04.784 { 00:09:04.784 "name": "pt4", 00:09:04.784 "uuid": "00000000-0000-0000-0000-000000000004", 00:09:04.784 "is_configured": true, 00:09:04.784 "data_offset": 2048, 00:09:04.784 "data_size": 63488 00:09:04.784 } 00:09:04.784 ] 00:09:04.784 } 00:09:04.784 } 00:09:04.784 }' 00:09:04.784 19:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:04.784 19:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:04.784 pt2 00:09:04.784 pt3 00:09:04.784 pt4' 00:09:04.784 19:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:04.784 19:49:55 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:04.784 19:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:04.784 19:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:04.784 19:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:04.784 19:49:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.784 19:49:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.784 19:49:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.784 19:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:04.784 19:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:04.784 19:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:04.784 19:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:04.784 19:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:04.784 19:49:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.784 19:49:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.784 19:49:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.784 19:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:04.784 19:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:04.784 19:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:04.784 19:49:55 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:04.784 19:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:04.784 19:49:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.784 19:49:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.784 19:49:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.784 19:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:04.784 19:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:04.784 19:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:04.784 19:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:04.784 19:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:09:04.784 19:49:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.784 19:49:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.784 19:49:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.784 19:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:04.784 19:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:04.784 19:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:04.785 19:49:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.785 19:49:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:09:04.785 19:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:09:04.785 [2024-11-26 19:49:55.697251] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:04.785 19:49:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.044 19:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=cc5731d3-4c46-42b7-a647-d5f1b72cb25a 00:09:05.044 19:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z cc5731d3-4c46-42b7-a647-d5f1b72cb25a ']' 00:09:05.044 19:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:05.044 19:49:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.044 19:49:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.044 [2024-11-26 19:49:55.724974] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:05.044 [2024-11-26 19:49:55.724996] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:05.044 [2024-11-26 19:49:55.725073] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:05.044 [2024-11-26 19:49:55.725160] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:05.044 [2024-11-26 19:49:55.725174] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:09:05.044 19:49:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.044 19:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:05.044 19:49:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.044 19:49:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 
-- # set +x 00:09:05.044 19:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:09:05.044 19:49:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.044 19:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:09:05.044 19:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:09:05.044 19:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:05.044 19:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:09:05.044 19:49:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.044 19:49:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.044 19:49:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.044 19:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:05.044 19:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:09:05.044 19:49:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.044 19:49:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.044 19:49:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.044 19:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:05.044 19:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:09:05.044 19:49:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.044 19:49:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.044 19:49:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:09:05.044 19:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:05.044 19:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:09:05.044 19:49:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.044 19:49:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.044 19:49:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.044 19:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:09:05.044 19:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:09:05.044 19:49:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.044 19:49:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.044 19:49:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.044 19:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:09:05.044 19:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:09:05.044 19:49:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:09:05.044 19:49:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:09:05.044 19:49:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:09:05.044 19:49:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:05.044 19:49:55 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@644 -- # type -t rpc_cmd 00:09:05.044 19:49:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:05.044 19:49:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:09:05.044 19:49:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.044 19:49:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.044 [2024-11-26 19:49:55.841014] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:05.044 [2024-11-26 19:49:55.842680] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:09:05.044 [2024-11-26 19:49:55.842814] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:09:05.044 [2024-11-26 19:49:55.842854] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:09:05.044 [2024-11-26 19:49:55.842901] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:09:05.044 [2024-11-26 19:49:55.842955] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:09:05.044 [2024-11-26 19:49:55.842971] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:09:05.044 [2024-11-26 19:49:55.842987] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:09:05.044 [2024-11-26 19:49:55.842997] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:05.044 [2024-11-26 19:49:55.843007] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 
00:09:05.044 request: 00:09:05.044 { 00:09:05.044 "name": "raid_bdev1", 00:09:05.044 "raid_level": "raid1", 00:09:05.044 "base_bdevs": [ 00:09:05.044 "malloc1", 00:09:05.044 "malloc2", 00:09:05.044 "malloc3", 00:09:05.044 "malloc4" 00:09:05.044 ], 00:09:05.044 "superblock": false, 00:09:05.044 "method": "bdev_raid_create", 00:09:05.044 "req_id": 1 00:09:05.044 } 00:09:05.044 Got JSON-RPC error response 00:09:05.044 response: 00:09:05.044 { 00:09:05.044 "code": -17, 00:09:05.044 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:09:05.044 } 00:09:05.044 19:49:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:05.044 19:49:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:09:05.044 19:49:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:05.044 19:49:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:05.044 19:49:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:05.044 19:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:05.044 19:49:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.044 19:49:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.044 19:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:09:05.044 19:49:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.044 19:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:09:05.044 19:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:09:05.044 19:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:05.044 19:49:55 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.044 19:49:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.044 [2024-11-26 19:49:55.877015] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:05.044 [2024-11-26 19:49:55.877070] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:05.044 [2024-11-26 19:49:55.877085] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:09:05.045 [2024-11-26 19:49:55.877094] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:05.045 [2024-11-26 19:49:55.879091] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:05.045 [2024-11-26 19:49:55.879126] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:05.045 [2024-11-26 19:49:55.879198] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:05.045 [2024-11-26 19:49:55.879246] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:05.045 pt1 00:09:05.045 19:49:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.045 19:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:09:05.045 19:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:05.045 19:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:05.045 19:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:05.045 19:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:05.045 19:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:05.045 19:49:55 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:05.045 19:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:05.045 19:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:05.045 19:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:05.045 19:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:05.045 19:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:05.045 19:49:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.045 19:49:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.045 19:49:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.045 19:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:05.045 "name": "raid_bdev1", 00:09:05.045 "uuid": "cc5731d3-4c46-42b7-a647-d5f1b72cb25a", 00:09:05.045 "strip_size_kb": 0, 00:09:05.045 "state": "configuring", 00:09:05.045 "raid_level": "raid1", 00:09:05.045 "superblock": true, 00:09:05.045 "num_base_bdevs": 4, 00:09:05.045 "num_base_bdevs_discovered": 1, 00:09:05.045 "num_base_bdevs_operational": 4, 00:09:05.045 "base_bdevs_list": [ 00:09:05.045 { 00:09:05.045 "name": "pt1", 00:09:05.045 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:05.045 "is_configured": true, 00:09:05.045 "data_offset": 2048, 00:09:05.045 "data_size": 63488 00:09:05.045 }, 00:09:05.045 { 00:09:05.045 "name": null, 00:09:05.045 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:05.045 "is_configured": false, 00:09:05.045 "data_offset": 2048, 00:09:05.045 "data_size": 63488 00:09:05.045 }, 00:09:05.045 { 00:09:05.045 "name": null, 00:09:05.045 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:05.045 
"is_configured": false, 00:09:05.045 "data_offset": 2048, 00:09:05.045 "data_size": 63488 00:09:05.045 }, 00:09:05.045 { 00:09:05.045 "name": null, 00:09:05.045 "uuid": "00000000-0000-0000-0000-000000000004", 00:09:05.045 "is_configured": false, 00:09:05.045 "data_offset": 2048, 00:09:05.045 "data_size": 63488 00:09:05.045 } 00:09:05.045 ] 00:09:05.045 }' 00:09:05.045 19:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:05.045 19:49:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.303 19:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:09:05.303 19:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:05.303 19:49:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.303 19:49:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.303 [2024-11-26 19:49:56.197103] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:05.303 [2024-11-26 19:49:56.197282] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:05.304 [2024-11-26 19:49:56.197305] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:09:05.304 [2024-11-26 19:49:56.197315] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:05.304 [2024-11-26 19:49:56.197739] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:05.304 [2024-11-26 19:49:56.197753] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:05.304 [2024-11-26 19:49:56.197829] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:05.304 [2024-11-26 19:49:56.197851] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 
00:09:05.304 pt2 00:09:05.304 19:49:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.304 19:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:09:05.304 19:49:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.304 19:49:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.304 [2024-11-26 19:49:56.205104] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:09:05.304 19:49:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.304 19:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:09:05.304 19:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:05.304 19:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:05.304 19:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:05.304 19:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:05.304 19:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:05.304 19:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:05.304 19:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:05.304 19:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:05.304 19:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:05.304 19:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:05.304 19:49:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.304 19:49:56 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.304 19:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:05.304 19:49:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.562 19:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:05.562 "name": "raid_bdev1", 00:09:05.562 "uuid": "cc5731d3-4c46-42b7-a647-d5f1b72cb25a", 00:09:05.562 "strip_size_kb": 0, 00:09:05.562 "state": "configuring", 00:09:05.562 "raid_level": "raid1", 00:09:05.562 "superblock": true, 00:09:05.562 "num_base_bdevs": 4, 00:09:05.562 "num_base_bdevs_discovered": 1, 00:09:05.562 "num_base_bdevs_operational": 4, 00:09:05.562 "base_bdevs_list": [ 00:09:05.562 { 00:09:05.562 "name": "pt1", 00:09:05.562 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:05.562 "is_configured": true, 00:09:05.562 "data_offset": 2048, 00:09:05.562 "data_size": 63488 00:09:05.562 }, 00:09:05.562 { 00:09:05.562 "name": null, 00:09:05.562 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:05.562 "is_configured": false, 00:09:05.562 "data_offset": 0, 00:09:05.562 "data_size": 63488 00:09:05.562 }, 00:09:05.562 { 00:09:05.562 "name": null, 00:09:05.562 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:05.562 "is_configured": false, 00:09:05.562 "data_offset": 2048, 00:09:05.562 "data_size": 63488 00:09:05.562 }, 00:09:05.562 { 00:09:05.562 "name": null, 00:09:05.562 "uuid": "00000000-0000-0000-0000-000000000004", 00:09:05.562 "is_configured": false, 00:09:05.562 "data_offset": 2048, 00:09:05.562 "data_size": 63488 00:09:05.562 } 00:09:05.562 ] 00:09:05.562 }' 00:09:05.562 19:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:05.562 19:49:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.821 19:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i 
= 1 )) 00:09:05.821 19:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:05.821 19:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:05.821 19:49:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.821 19:49:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.821 [2024-11-26 19:49:56.529156] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:05.821 [2024-11-26 19:49:56.529314] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:05.821 [2024-11-26 19:49:56.529360] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:09:05.821 [2024-11-26 19:49:56.529550] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:05.821 [2024-11-26 19:49:56.529995] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:05.822 [2024-11-26 19:49:56.530086] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:05.822 [2024-11-26 19:49:56.530211] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:05.822 [2024-11-26 19:49:56.530277] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:05.822 pt2 00:09:05.822 19:49:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.822 19:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:05.822 19:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:05.822 19:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:05.822 19:49:56 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.822 19:49:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.822 [2024-11-26 19:49:56.537121] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:05.822 [2024-11-26 19:49:56.537156] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:05.822 [2024-11-26 19:49:56.537169] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:09:05.822 [2024-11-26 19:49:56.537176] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:05.822 [2024-11-26 19:49:56.537492] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:05.822 [2024-11-26 19:49:56.537508] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:05.822 [2024-11-26 19:49:56.537554] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:05.822 [2024-11-26 19:49:56.537568] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:05.822 pt3 00:09:05.822 19:49:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.822 19:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:05.822 19:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:05.822 19:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:09:05.822 19:49:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.822 19:49:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.822 [2024-11-26 19:49:56.545096] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:09:05.822 [2024-11-26 
19:49:56.545127] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:05.822 [2024-11-26 19:49:56.545138] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:09:05.822 [2024-11-26 19:49:56.545145] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:05.822 [2024-11-26 19:49:56.545461] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:05.822 [2024-11-26 19:49:56.545477] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:09:05.822 [2024-11-26 19:49:56.545520] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:09:05.822 [2024-11-26 19:49:56.545536] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:09:05.822 [2024-11-26 19:49:56.545647] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:05.822 [2024-11-26 19:49:56.545658] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:05.822 [2024-11-26 19:49:56.545857] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:09:05.822 [2024-11-26 19:49:56.545970] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:05.822 [2024-11-26 19:49:56.545979] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:05.822 [2024-11-26 19:49:56.546077] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:05.822 pt4 00:09:05.822 19:49:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.822 19:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:05.822 19:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:05.822 19:49:56 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:09:05.822 19:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:05.822 19:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:05.822 19:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:05.822 19:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:05.822 19:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:05.822 19:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:05.822 19:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:05.822 19:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:05.822 19:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:05.822 19:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:05.822 19:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:05.822 19:49:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.822 19:49:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.822 19:49:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.822 19:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:05.822 "name": "raid_bdev1", 00:09:05.822 "uuid": "cc5731d3-4c46-42b7-a647-d5f1b72cb25a", 00:09:05.822 "strip_size_kb": 0, 00:09:05.822 "state": "online", 00:09:05.822 "raid_level": "raid1", 00:09:05.822 "superblock": true, 00:09:05.822 "num_base_bdevs": 4, 00:09:05.822 
"num_base_bdevs_discovered": 4, 00:09:05.822 "num_base_bdevs_operational": 4, 00:09:05.822 "base_bdevs_list": [ 00:09:05.822 { 00:09:05.822 "name": "pt1", 00:09:05.822 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:05.822 "is_configured": true, 00:09:05.822 "data_offset": 2048, 00:09:05.822 "data_size": 63488 00:09:05.822 }, 00:09:05.822 { 00:09:05.822 "name": "pt2", 00:09:05.822 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:05.822 "is_configured": true, 00:09:05.822 "data_offset": 2048, 00:09:05.822 "data_size": 63488 00:09:05.822 }, 00:09:05.822 { 00:09:05.822 "name": "pt3", 00:09:05.822 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:05.822 "is_configured": true, 00:09:05.822 "data_offset": 2048, 00:09:05.822 "data_size": 63488 00:09:05.822 }, 00:09:05.822 { 00:09:05.822 "name": "pt4", 00:09:05.822 "uuid": "00000000-0000-0000-0000-000000000004", 00:09:05.822 "is_configured": true, 00:09:05.822 "data_offset": 2048, 00:09:05.822 "data_size": 63488 00:09:05.822 } 00:09:05.822 ] 00:09:05.822 }' 00:09:05.822 19:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:05.822 19:49:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.081 19:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:09:06.081 19:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:06.081 19:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:06.081 19:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:06.081 19:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:06.081 19:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:06.081 19:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:09:06.081 19:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:06.081 19:49:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.081 19:49:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.081 [2024-11-26 19:49:56.881538] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:06.081 19:49:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.081 19:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:06.081 "name": "raid_bdev1", 00:09:06.081 "aliases": [ 00:09:06.081 "cc5731d3-4c46-42b7-a647-d5f1b72cb25a" 00:09:06.081 ], 00:09:06.081 "product_name": "Raid Volume", 00:09:06.081 "block_size": 512, 00:09:06.081 "num_blocks": 63488, 00:09:06.081 "uuid": "cc5731d3-4c46-42b7-a647-d5f1b72cb25a", 00:09:06.081 "assigned_rate_limits": { 00:09:06.081 "rw_ios_per_sec": 0, 00:09:06.081 "rw_mbytes_per_sec": 0, 00:09:06.081 "r_mbytes_per_sec": 0, 00:09:06.081 "w_mbytes_per_sec": 0 00:09:06.081 }, 00:09:06.081 "claimed": false, 00:09:06.081 "zoned": false, 00:09:06.081 "supported_io_types": { 00:09:06.081 "read": true, 00:09:06.081 "write": true, 00:09:06.081 "unmap": false, 00:09:06.081 "flush": false, 00:09:06.081 "reset": true, 00:09:06.081 "nvme_admin": false, 00:09:06.081 "nvme_io": false, 00:09:06.081 "nvme_io_md": false, 00:09:06.081 "write_zeroes": true, 00:09:06.081 "zcopy": false, 00:09:06.081 "get_zone_info": false, 00:09:06.081 "zone_management": false, 00:09:06.081 "zone_append": false, 00:09:06.081 "compare": false, 00:09:06.081 "compare_and_write": false, 00:09:06.081 "abort": false, 00:09:06.081 "seek_hole": false, 00:09:06.081 "seek_data": false, 00:09:06.081 "copy": false, 00:09:06.081 "nvme_iov_md": false 00:09:06.081 }, 00:09:06.081 "memory_domains": [ 00:09:06.081 { 00:09:06.081 "dma_device_id": "system", 00:09:06.081 
"dma_device_type": 1 00:09:06.081 }, 00:09:06.081 { 00:09:06.081 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:06.081 "dma_device_type": 2 00:09:06.081 }, 00:09:06.081 { 00:09:06.081 "dma_device_id": "system", 00:09:06.081 "dma_device_type": 1 00:09:06.081 }, 00:09:06.081 { 00:09:06.081 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:06.081 "dma_device_type": 2 00:09:06.081 }, 00:09:06.081 { 00:09:06.081 "dma_device_id": "system", 00:09:06.081 "dma_device_type": 1 00:09:06.081 }, 00:09:06.081 { 00:09:06.081 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:06.081 "dma_device_type": 2 00:09:06.081 }, 00:09:06.081 { 00:09:06.081 "dma_device_id": "system", 00:09:06.081 "dma_device_type": 1 00:09:06.081 }, 00:09:06.081 { 00:09:06.081 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:06.081 "dma_device_type": 2 00:09:06.081 } 00:09:06.081 ], 00:09:06.081 "driver_specific": { 00:09:06.081 "raid": { 00:09:06.081 "uuid": "cc5731d3-4c46-42b7-a647-d5f1b72cb25a", 00:09:06.081 "strip_size_kb": 0, 00:09:06.081 "state": "online", 00:09:06.081 "raid_level": "raid1", 00:09:06.081 "superblock": true, 00:09:06.081 "num_base_bdevs": 4, 00:09:06.081 "num_base_bdevs_discovered": 4, 00:09:06.081 "num_base_bdevs_operational": 4, 00:09:06.081 "base_bdevs_list": [ 00:09:06.081 { 00:09:06.081 "name": "pt1", 00:09:06.081 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:06.081 "is_configured": true, 00:09:06.081 "data_offset": 2048, 00:09:06.081 "data_size": 63488 00:09:06.081 }, 00:09:06.081 { 00:09:06.081 "name": "pt2", 00:09:06.081 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:06.081 "is_configured": true, 00:09:06.081 "data_offset": 2048, 00:09:06.081 "data_size": 63488 00:09:06.081 }, 00:09:06.081 { 00:09:06.081 "name": "pt3", 00:09:06.081 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:06.081 "is_configured": true, 00:09:06.081 "data_offset": 2048, 00:09:06.081 "data_size": 63488 00:09:06.081 }, 00:09:06.081 { 00:09:06.081 "name": "pt4", 00:09:06.081 "uuid": 
"00000000-0000-0000-0000-000000000004", 00:09:06.081 "is_configured": true, 00:09:06.081 "data_offset": 2048, 00:09:06.081 "data_size": 63488 00:09:06.081 } 00:09:06.081 ] 00:09:06.081 } 00:09:06.081 } 00:09:06.081 }' 00:09:06.081 19:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:06.081 19:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:06.081 pt2 00:09:06.081 pt3 00:09:06.081 pt4' 00:09:06.081 19:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:06.081 19:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:06.081 19:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:06.081 19:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:06.081 19:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:06.081 19:49:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.081 19:49:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.081 19:49:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.081 19:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:06.081 19:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:06.081 19:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:06.081 19:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:06.081 19:49:57 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:06.081 19:49:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.081 19:49:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.340 19:49:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.340 19:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:06.340 19:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:06.340 19:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:06.340 19:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:06.340 19:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:06.340 19:49:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.340 19:49:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.340 19:49:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.340 19:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:06.340 19:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:06.340 19:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:06.340 19:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:09:06.340 19:49:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.340 19:49:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.340 19:49:57 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:06.340 19:49:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.340 19:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:06.340 19:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:06.340 19:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:06.340 19:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:09:06.340 19:49:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.340 19:49:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.340 [2024-11-26 19:49:57.121531] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:06.340 19:49:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.340 19:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' cc5731d3-4c46-42b7-a647-d5f1b72cb25a '!=' cc5731d3-4c46-42b7-a647-d5f1b72cb25a ']' 00:09:06.340 19:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:09:06.340 19:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:06.340 19:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:06.340 19:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:09:06.340 19:49:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.340 19:49:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.340 [2024-11-26 19:49:57.149286] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:09:06.340 
19:49:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.340 19:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:09:06.340 19:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:06.340 19:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:06.340 19:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:06.340 19:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:06.340 19:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:06.340 19:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:06.340 19:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:06.340 19:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:06.340 19:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:06.340 19:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:06.340 19:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:06.340 19:49:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.340 19:49:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.340 19:49:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.340 19:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:06.340 "name": "raid_bdev1", 00:09:06.340 "uuid": "cc5731d3-4c46-42b7-a647-d5f1b72cb25a", 00:09:06.340 "strip_size_kb": 0, 00:09:06.340 "state": 
"online", 00:09:06.340 "raid_level": "raid1", 00:09:06.340 "superblock": true, 00:09:06.340 "num_base_bdevs": 4, 00:09:06.340 "num_base_bdevs_discovered": 3, 00:09:06.340 "num_base_bdevs_operational": 3, 00:09:06.340 "base_bdevs_list": [ 00:09:06.340 { 00:09:06.340 "name": null, 00:09:06.340 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:06.340 "is_configured": false, 00:09:06.340 "data_offset": 0, 00:09:06.340 "data_size": 63488 00:09:06.340 }, 00:09:06.340 { 00:09:06.340 "name": "pt2", 00:09:06.340 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:06.340 "is_configured": true, 00:09:06.340 "data_offset": 2048, 00:09:06.340 "data_size": 63488 00:09:06.340 }, 00:09:06.340 { 00:09:06.340 "name": "pt3", 00:09:06.340 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:06.340 "is_configured": true, 00:09:06.340 "data_offset": 2048, 00:09:06.340 "data_size": 63488 00:09:06.340 }, 00:09:06.340 { 00:09:06.340 "name": "pt4", 00:09:06.340 "uuid": "00000000-0000-0000-0000-000000000004", 00:09:06.340 "is_configured": true, 00:09:06.340 "data_offset": 2048, 00:09:06.340 "data_size": 63488 00:09:06.340 } 00:09:06.340 ] 00:09:06.340 }' 00:09:06.340 19:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:06.340 19:49:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.598 19:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:06.598 19:49:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.598 19:49:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.598 [2024-11-26 19:49:57.485361] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:06.598 [2024-11-26 19:49:57.485392] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:06.598 [2024-11-26 19:49:57.485472] bdev_raid.c: 
492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:06.598 [2024-11-26 19:49:57.485552] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:06.598 [2024-11-26 19:49:57.485561] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:06.598 19:49:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.598 19:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:06.598 19:49:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.598 19:49:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.598 19:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:09:06.598 19:49:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.598 19:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:09:06.598 19:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:09:06.598 19:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:09:06.598 19:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:09:06.598 19:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:09:06.598 19:49:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.598 19:49:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.598 19:49:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.598 19:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:09:06.598 19:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < 
num_base_bdevs )) 00:09:06.598 19:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:09:06.598 19:49:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.598 19:49:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.857 19:49:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.857 19:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:09:06.857 19:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:09:06.857 19:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:09:06.857 19:49:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.857 19:49:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.857 19:49:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.857 19:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:09:06.857 19:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:09:06.857 19:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:09:06.857 19:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:09:06.857 19:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:06.857 19:49:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.857 19:49:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.857 [2024-11-26 19:49:57.549298] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:06.857 [2024-11-26 
19:49:57.549356] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:06.857 [2024-11-26 19:49:57.549374] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:09:06.857 [2024-11-26 19:49:57.549382] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:06.857 [2024-11-26 19:49:57.551398] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:06.857 [2024-11-26 19:49:57.551515] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:06.857 [2024-11-26 19:49:57.551599] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:06.857 [2024-11-26 19:49:57.551641] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:06.857 pt2 00:09:06.857 19:49:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.857 19:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:09:06.857 19:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:06.857 19:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:06.857 19:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:06.857 19:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:06.857 19:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:06.857 19:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:06.857 19:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:06.857 19:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:06.857 19:49:57 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:06.857 19:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:06.857 19:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:06.857 19:49:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.857 19:49:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.857 19:49:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.857 19:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:06.857 "name": "raid_bdev1", 00:09:06.857 "uuid": "cc5731d3-4c46-42b7-a647-d5f1b72cb25a", 00:09:06.857 "strip_size_kb": 0, 00:09:06.857 "state": "configuring", 00:09:06.857 "raid_level": "raid1", 00:09:06.857 "superblock": true, 00:09:06.857 "num_base_bdevs": 4, 00:09:06.857 "num_base_bdevs_discovered": 1, 00:09:06.857 "num_base_bdevs_operational": 3, 00:09:06.857 "base_bdevs_list": [ 00:09:06.857 { 00:09:06.857 "name": null, 00:09:06.857 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:06.857 "is_configured": false, 00:09:06.857 "data_offset": 2048, 00:09:06.857 "data_size": 63488 00:09:06.857 }, 00:09:06.857 { 00:09:06.857 "name": "pt2", 00:09:06.857 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:06.857 "is_configured": true, 00:09:06.857 "data_offset": 2048, 00:09:06.857 "data_size": 63488 00:09:06.857 }, 00:09:06.857 { 00:09:06.857 "name": null, 00:09:06.857 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:06.857 "is_configured": false, 00:09:06.857 "data_offset": 2048, 00:09:06.857 "data_size": 63488 00:09:06.857 }, 00:09:06.857 { 00:09:06.857 "name": null, 00:09:06.857 "uuid": "00000000-0000-0000-0000-000000000004", 00:09:06.857 "is_configured": false, 00:09:06.857 "data_offset": 2048, 00:09:06.857 "data_size": 63488 00:09:06.857 
} 00:09:06.857 ] 00:09:06.857 }' 00:09:06.857 19:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:06.857 19:49:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.115 19:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:09:07.115 19:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:09:07.115 19:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:07.115 19:49:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.115 19:49:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.115 [2024-11-26 19:49:57.869401] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:07.115 [2024-11-26 19:49:57.869461] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:07.115 [2024-11-26 19:49:57.869481] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:09:07.115 [2024-11-26 19:49:57.869489] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:07.115 [2024-11-26 19:49:57.869897] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:07.115 [2024-11-26 19:49:57.869908] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:07.115 [2024-11-26 19:49:57.869980] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:07.115 [2024-11-26 19:49:57.869999] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:07.115 pt3 00:09:07.115 19:49:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.115 19:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # 
verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:09:07.115 19:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:07.115 19:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:07.115 19:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:07.115 19:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:07.115 19:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:07.115 19:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:07.115 19:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:07.115 19:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:07.115 19:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:07.115 19:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:07.115 19:49:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.115 19:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:07.115 19:49:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.116 19:49:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.116 19:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:07.116 "name": "raid_bdev1", 00:09:07.116 "uuid": "cc5731d3-4c46-42b7-a647-d5f1b72cb25a", 00:09:07.116 "strip_size_kb": 0, 00:09:07.116 "state": "configuring", 00:09:07.116 "raid_level": "raid1", 00:09:07.116 "superblock": true, 00:09:07.116 "num_base_bdevs": 4, 00:09:07.116 "num_base_bdevs_discovered": 2, 
00:09:07.116 "num_base_bdevs_operational": 3, 00:09:07.116 "base_bdevs_list": [ 00:09:07.116 { 00:09:07.116 "name": null, 00:09:07.116 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:07.116 "is_configured": false, 00:09:07.116 "data_offset": 2048, 00:09:07.116 "data_size": 63488 00:09:07.116 }, 00:09:07.116 { 00:09:07.116 "name": "pt2", 00:09:07.116 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:07.116 "is_configured": true, 00:09:07.116 "data_offset": 2048, 00:09:07.116 "data_size": 63488 00:09:07.116 }, 00:09:07.116 { 00:09:07.116 "name": "pt3", 00:09:07.116 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:07.116 "is_configured": true, 00:09:07.116 "data_offset": 2048, 00:09:07.116 "data_size": 63488 00:09:07.116 }, 00:09:07.116 { 00:09:07.116 "name": null, 00:09:07.116 "uuid": "00000000-0000-0000-0000-000000000004", 00:09:07.116 "is_configured": false, 00:09:07.116 "data_offset": 2048, 00:09:07.116 "data_size": 63488 00:09:07.116 } 00:09:07.116 ] 00:09:07.116 }' 00:09:07.116 19:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:07.116 19:49:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.373 19:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:09:07.374 19:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:09:07.374 19:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:09:07.374 19:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:09:07.374 19:49:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.374 19:49:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.374 [2024-11-26 19:49:58.185462] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:09:07.374 [2024-11-26 
19:49:58.185524] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:07.374 [2024-11-26 19:49:58.185548] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:09:07.374 [2024-11-26 19:49:58.185556] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:07.374 [2024-11-26 19:49:58.185948] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:07.374 [2024-11-26 19:49:58.185958] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:09:07.374 [2024-11-26 19:49:58.186031] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:09:07.374 [2024-11-26 19:49:58.186050] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:09:07.374 [2024-11-26 19:49:58.186161] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:07.374 [2024-11-26 19:49:58.186169] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:07.374 [2024-11-26 19:49:58.186385] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:09:07.374 [2024-11-26 19:49:58.186501] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:07.374 [2024-11-26 19:49:58.186541] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:09:07.374 [2024-11-26 19:49:58.186657] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:07.374 pt4 00:09:07.374 19:49:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.374 19:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:09:07.374 19:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:07.374 19:49:58 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:07.374 19:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:07.374 19:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:07.374 19:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:07.374 19:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:07.374 19:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:07.374 19:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:07.374 19:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:07.374 19:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:07.374 19:49:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.374 19:49:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.374 19:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:07.374 19:49:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.374 19:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:07.374 "name": "raid_bdev1", 00:09:07.374 "uuid": "cc5731d3-4c46-42b7-a647-d5f1b72cb25a", 00:09:07.374 "strip_size_kb": 0, 00:09:07.374 "state": "online", 00:09:07.374 "raid_level": "raid1", 00:09:07.374 "superblock": true, 00:09:07.374 "num_base_bdevs": 4, 00:09:07.374 "num_base_bdevs_discovered": 3, 00:09:07.374 "num_base_bdevs_operational": 3, 00:09:07.374 "base_bdevs_list": [ 00:09:07.374 { 00:09:07.374 "name": null, 00:09:07.374 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:07.374 
"is_configured": false, 00:09:07.374 "data_offset": 2048, 00:09:07.374 "data_size": 63488 00:09:07.374 }, 00:09:07.374 { 00:09:07.374 "name": "pt2", 00:09:07.374 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:07.374 "is_configured": true, 00:09:07.374 "data_offset": 2048, 00:09:07.374 "data_size": 63488 00:09:07.374 }, 00:09:07.374 { 00:09:07.374 "name": "pt3", 00:09:07.374 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:07.374 "is_configured": true, 00:09:07.374 "data_offset": 2048, 00:09:07.374 "data_size": 63488 00:09:07.374 }, 00:09:07.374 { 00:09:07.374 "name": "pt4", 00:09:07.374 "uuid": "00000000-0000-0000-0000-000000000004", 00:09:07.374 "is_configured": true, 00:09:07.374 "data_offset": 2048, 00:09:07.374 "data_size": 63488 00:09:07.374 } 00:09:07.374 ] 00:09:07.374 }' 00:09:07.374 19:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:07.374 19:49:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.632 19:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:07.632 19:49:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.632 19:49:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.632 [2024-11-26 19:49:58.481517] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:07.632 [2024-11-26 19:49:58.481629] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:07.632 [2024-11-26 19:49:58.481713] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:07.632 [2024-11-26 19:49:58.481784] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:07.632 [2024-11-26 19:49:58.481795] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 
00:09:07.632 19:49:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.632 19:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:07.632 19:49:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.632 19:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:09:07.632 19:49:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.632 19:49:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.632 19:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:09:07.632 19:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:09:07.632 19:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:09:07.632 19:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:09:07.632 19:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:09:07.632 19:49:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.632 19:49:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.632 19:49:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.632 19:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:07.633 19:49:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.633 19:49:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.633 [2024-11-26 19:49:58.533521] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:07.633 [2024-11-26 19:49:58.533677] vbdev_passthru.c: 635:vbdev_passthru_register: 
*NOTICE*: base bdev opened 00:09:07.633 [2024-11-26 19:49:58.533711] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:09:07.633 [2024-11-26 19:49:58.533897] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:07.633 [2024-11-26 19:49:58.535929] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:07.633 [2024-11-26 19:49:58.536032] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:07.633 [2024-11-26 19:49:58.536120] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:07.633 [2024-11-26 19:49:58.536161] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:07.633 [2024-11-26 19:49:58.536276] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:09:07.633 [2024-11-26 19:49:58.536287] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:07.633 [2024-11-26 19:49:58.536300] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:09:07.633 [2024-11-26 19:49:58.536361] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:07.633 [2024-11-26 19:49:58.536452] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:07.633 pt1 00:09:07.633 19:49:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.633 19:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:09:07.633 19:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:09:07.633 19:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:07.633 19:49:58 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:07.633 19:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:07.633 19:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:07.633 19:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:07.633 19:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:07.633 19:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:07.633 19:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:07.633 19:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:07.633 19:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:07.633 19:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:07.633 19:49:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.633 19:49:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.633 19:49:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.891 19:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:07.891 "name": "raid_bdev1", 00:09:07.891 "uuid": "cc5731d3-4c46-42b7-a647-d5f1b72cb25a", 00:09:07.891 "strip_size_kb": 0, 00:09:07.891 "state": "configuring", 00:09:07.891 "raid_level": "raid1", 00:09:07.891 "superblock": true, 00:09:07.891 "num_base_bdevs": 4, 00:09:07.891 "num_base_bdevs_discovered": 2, 00:09:07.891 "num_base_bdevs_operational": 3, 00:09:07.891 "base_bdevs_list": [ 00:09:07.891 { 00:09:07.891 "name": null, 00:09:07.891 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:07.891 "is_configured": false, 00:09:07.891 
"data_offset": 2048, 00:09:07.891 "data_size": 63488 00:09:07.891 }, 00:09:07.891 { 00:09:07.891 "name": "pt2", 00:09:07.891 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:07.891 "is_configured": true, 00:09:07.891 "data_offset": 2048, 00:09:07.891 "data_size": 63488 00:09:07.891 }, 00:09:07.891 { 00:09:07.891 "name": "pt3", 00:09:07.891 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:07.891 "is_configured": true, 00:09:07.891 "data_offset": 2048, 00:09:07.891 "data_size": 63488 00:09:07.891 }, 00:09:07.891 { 00:09:07.891 "name": null, 00:09:07.891 "uuid": "00000000-0000-0000-0000-000000000004", 00:09:07.891 "is_configured": false, 00:09:07.891 "data_offset": 2048, 00:09:07.891 "data_size": 63488 00:09:07.891 } 00:09:07.891 ] 00:09:07.891 }' 00:09:07.891 19:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:07.891 19:49:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.149 19:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:09:08.149 19:49:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.149 19:49:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.149 19:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:09:08.149 19:49:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.149 19:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:09:08.149 19:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:09:08.149 19:49:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.149 19:49:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:09:08.149 [2024-11-26 19:49:58.897630] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:09:08.149 [2024-11-26 19:49:58.897698] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:08.149 [2024-11-26 19:49:58.897718] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:09:08.149 [2024-11-26 19:49:58.897727] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:08.149 [2024-11-26 19:49:58.898128] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:08.149 [2024-11-26 19:49:58.898141] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:09:08.149 [2024-11-26 19:49:58.898218] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:09:08.149 [2024-11-26 19:49:58.898239] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:09:08.149 [2024-11-26 19:49:58.898366] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:09:08.149 [2024-11-26 19:49:58.898374] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:08.150 [2024-11-26 19:49:58.898600] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:09:08.150 [2024-11-26 19:49:58.898718] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:09:08.150 [2024-11-26 19:49:58.898726] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:09:08.150 [2024-11-26 19:49:58.898843] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:08.150 pt4 00:09:08.150 19:49:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.150 19:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 
00:09:08.150 19:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:08.150 19:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:08.150 19:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:08.150 19:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:08.150 19:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:08.150 19:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:08.150 19:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:08.150 19:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:08.150 19:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:08.150 19:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:08.150 19:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:08.150 19:49:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.150 19:49:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.150 19:49:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.150 19:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:08.150 "name": "raid_bdev1", 00:09:08.150 "uuid": "cc5731d3-4c46-42b7-a647-d5f1b72cb25a", 00:09:08.150 "strip_size_kb": 0, 00:09:08.150 "state": "online", 00:09:08.150 "raid_level": "raid1", 00:09:08.150 "superblock": true, 00:09:08.150 "num_base_bdevs": 4, 00:09:08.150 "num_base_bdevs_discovered": 3, 00:09:08.150 "num_base_bdevs_operational": 3, 00:09:08.150 
"base_bdevs_list": [ 00:09:08.150 { 00:09:08.150 "name": null, 00:09:08.150 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:08.150 "is_configured": false, 00:09:08.150 "data_offset": 2048, 00:09:08.150 "data_size": 63488 00:09:08.150 }, 00:09:08.150 { 00:09:08.150 "name": "pt2", 00:09:08.150 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:08.150 "is_configured": true, 00:09:08.150 "data_offset": 2048, 00:09:08.150 "data_size": 63488 00:09:08.150 }, 00:09:08.150 { 00:09:08.150 "name": "pt3", 00:09:08.150 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:08.150 "is_configured": true, 00:09:08.150 "data_offset": 2048, 00:09:08.150 "data_size": 63488 00:09:08.150 }, 00:09:08.150 { 00:09:08.150 "name": "pt4", 00:09:08.150 "uuid": "00000000-0000-0000-0000-000000000004", 00:09:08.150 "is_configured": true, 00:09:08.150 "data_offset": 2048, 00:09:08.150 "data_size": 63488 00:09:08.150 } 00:09:08.150 ] 00:09:08.150 }' 00:09:08.150 19:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:08.150 19:49:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.408 19:49:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:09:08.408 19:49:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.408 19:49:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:09:08.408 19:49:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.408 19:49:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.408 19:49:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:09:08.408 19:49:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:09:08.408 19:49:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd 
bdev_get_bdevs -b raid_bdev1 00:09:08.408 19:49:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.408 19:49:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.408 [2024-11-26 19:49:59.249931] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:08.408 19:49:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.409 19:49:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' cc5731d3-4c46-42b7-a647-d5f1b72cb25a '!=' cc5731d3-4c46-42b7-a647-d5f1b72cb25a ']' 00:09:08.409 19:49:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 72541 00:09:08.409 19:49:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 72541 ']' 00:09:08.409 19:49:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 72541 00:09:08.409 19:49:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:09:08.409 19:49:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:08.409 19:49:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72541 00:09:08.409 killing process with pid 72541 00:09:08.409 19:49:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:08.409 19:49:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:08.409 19:49:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72541' 00:09:08.409 19:49:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 72541 00:09:08.409 [2024-11-26 19:49:59.291409] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:08.409 19:49:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 72541 
00:09:08.409 [2024-11-26 19:49:59.291498] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:08.409 [2024-11-26 19:49:59.291573] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:08.409 [2024-11-26 19:49:59.291584] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:09:08.668 [2024-11-26 19:49:59.496489] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:09.234 19:50:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:09:09.234 00:09:09.234 real 0m5.994s 00:09:09.234 user 0m9.457s 00:09:09.234 sys 0m1.086s 00:09:09.234 19:50:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:09.234 19:50:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.234 ************************************ 00:09:09.234 END TEST raid_superblock_test 00:09:09.234 ************************************ 00:09:09.234 19:50:00 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 4 read 00:09:09.234 19:50:00 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:09.234 19:50:00 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:09.234 19:50:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:09.234 ************************************ 00:09:09.234 START TEST raid_read_error_test 00:09:09.234 ************************************ 00:09:09.234 19:50:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 4 read 00:09:09.234 19:50:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:09:09.234 19:50:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:09:09.234 19:50:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local 
error_io_type=read 00:09:09.234 19:50:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:09.234 19:50:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:09.234 19:50:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:09.234 19:50:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:09.234 19:50:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:09.234 19:50:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:09.234 19:50:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:09.234 19:50:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:09.234 19:50:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:09.234 19:50:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:09.234 19:50:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:09.234 19:50:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:09:09.234 19:50:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:09.492 19:50:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:09.492 19:50:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:09:09.492 19:50:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:09.492 19:50:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:09.492 19:50:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:09.492 19:50:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:09.492 
19:50:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:09.492 19:50:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:09.492 19:50:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:09:09.492 19:50:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:09:09.492 19:50:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:09.492 19:50:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.aJQb07AI8f 00:09:09.492 19:50:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=73002 00:09:09.492 19:50:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:09.492 19:50:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 73002 00:09:09.492 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:09.492 19:50:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 73002 ']' 00:09:09.492 19:50:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:09.492 19:50:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:09.492 19:50:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:09.492 19:50:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:09.492 19:50:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.492 [2024-11-26 19:50:00.242272] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 
00:09:09.492 [2024-11-26 19:50:00.242413] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73002 ] 00:09:09.492 [2024-11-26 19:50:00.400127] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:09.750 [2024-11-26 19:50:00.499926] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:09.750 [2024-11-26 19:50:00.620814] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:09.750 [2024-11-26 19:50:00.620848] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:10.316 19:50:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:10.316 19:50:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:10.316 19:50:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:10.316 19:50:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:10.316 19:50:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.316 19:50:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.316 BaseBdev1_malloc 00:09:10.316 19:50:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.316 19:50:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:10.316 19:50:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.316 19:50:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.316 true 00:09:10.316 19:50:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:09:10.316 19:50:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:10.316 19:50:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.316 19:50:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.316 [2024-11-26 19:50:01.129866] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:10.316 [2024-11-26 19:50:01.130010] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:10.316 [2024-11-26 19:50:01.130033] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:10.316 [2024-11-26 19:50:01.130044] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:10.316 [2024-11-26 19:50:01.131933] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:10.316 [2024-11-26 19:50:01.131966] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:10.316 BaseBdev1 00:09:10.316 19:50:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.316 19:50:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:10.316 19:50:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:10.316 19:50:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.316 19:50:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.316 BaseBdev2_malloc 00:09:10.316 19:50:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.316 19:50:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:10.316 19:50:01 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.317 19:50:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.317 true 00:09:10.317 19:50:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.317 19:50:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:10.317 19:50:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.317 19:50:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.317 [2024-11-26 19:50:01.171626] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:10.317 [2024-11-26 19:50:01.171675] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:10.317 [2024-11-26 19:50:01.171689] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:10.317 [2024-11-26 19:50:01.171698] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:10.317 [2024-11-26 19:50:01.173587] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:10.317 [2024-11-26 19:50:01.173616] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:10.317 BaseBdev2 00:09:10.317 19:50:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.317 19:50:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:10.317 19:50:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:10.317 19:50:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.317 19:50:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.317 BaseBdev3_malloc 00:09:10.317 19:50:01 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.317 19:50:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:10.317 19:50:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.317 19:50:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.317 true 00:09:10.317 19:50:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.317 19:50:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:10.317 19:50:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.317 19:50:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.317 [2024-11-26 19:50:01.223853] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:10.317 [2024-11-26 19:50:01.223901] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:10.317 [2024-11-26 19:50:01.223917] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:10.317 [2024-11-26 19:50:01.223926] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:10.317 [2024-11-26 19:50:01.225801] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:10.317 [2024-11-26 19:50:01.225833] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:09:10.317 BaseBdev3 00:09:10.317 19:50:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.317 19:50:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:10.317 19:50:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:09:10.317 19:50:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.317 19:50:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.575 BaseBdev4_malloc 00:09:10.575 19:50:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.575 19:50:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:09:10.575 19:50:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.575 19:50:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.575 true 00:09:10.575 19:50:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.575 19:50:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:09:10.575 19:50:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.575 19:50:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.575 [2024-11-26 19:50:01.265415] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:09:10.575 [2024-11-26 19:50:01.265461] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:10.575 [2024-11-26 19:50:01.265477] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:09:10.575 [2024-11-26 19:50:01.265486] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:10.575 [2024-11-26 19:50:01.267324] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:10.575 [2024-11-26 19:50:01.267459] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:09:10.575 BaseBdev4 00:09:10.575 19:50:01 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.575 19:50:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:09:10.575 19:50:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.575 19:50:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.575 [2024-11-26 19:50:01.273466] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:10.575 [2024-11-26 19:50:01.275091] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:10.575 [2024-11-26 19:50:01.275235] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:10.575 [2024-11-26 19:50:01.275296] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:09:10.575 [2024-11-26 19:50:01.275502] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:09:10.575 [2024-11-26 19:50:01.275512] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:10.575 [2024-11-26 19:50:01.275716] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:09:10.575 [2024-11-26 19:50:01.275847] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:09:10.575 [2024-11-26 19:50:01.275854] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:09:10.575 [2024-11-26 19:50:01.275973] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:10.575 19:50:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.575 19:50:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:09:10.575 19:50:01 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:10.575 19:50:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:10.575 19:50:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:10.575 19:50:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:10.575 19:50:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:10.575 19:50:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:10.575 19:50:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:10.575 19:50:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:10.575 19:50:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:10.575 19:50:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:10.575 19:50:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:10.575 19:50:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.575 19:50:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.575 19:50:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.575 19:50:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:10.575 "name": "raid_bdev1", 00:09:10.575 "uuid": "7aaaf729-4200-4b70-abf7-6a148e8d39fb", 00:09:10.575 "strip_size_kb": 0, 00:09:10.575 "state": "online", 00:09:10.575 "raid_level": "raid1", 00:09:10.575 "superblock": true, 00:09:10.575 "num_base_bdevs": 4, 00:09:10.575 "num_base_bdevs_discovered": 4, 00:09:10.575 "num_base_bdevs_operational": 4, 00:09:10.575 "base_bdevs_list": [ 00:09:10.575 { 
00:09:10.575 "name": "BaseBdev1", 00:09:10.575 "uuid": "b4d7632a-4d76-5f4b-91d9-70b9f32e20ea", 00:09:10.575 "is_configured": true, 00:09:10.575 "data_offset": 2048, 00:09:10.575 "data_size": 63488 00:09:10.575 }, 00:09:10.575 { 00:09:10.575 "name": "BaseBdev2", 00:09:10.575 "uuid": "b8b10b20-ea53-5f5a-8ea1-a0a58a5f25b7", 00:09:10.575 "is_configured": true, 00:09:10.575 "data_offset": 2048, 00:09:10.575 "data_size": 63488 00:09:10.575 }, 00:09:10.575 { 00:09:10.575 "name": "BaseBdev3", 00:09:10.575 "uuid": "0f821896-222a-5011-a5ee-d4bca31cd7e2", 00:09:10.575 "is_configured": true, 00:09:10.575 "data_offset": 2048, 00:09:10.575 "data_size": 63488 00:09:10.575 }, 00:09:10.575 { 00:09:10.575 "name": "BaseBdev4", 00:09:10.575 "uuid": "0e0bbb6b-6089-523d-a2a6-ef17e739a28d", 00:09:10.575 "is_configured": true, 00:09:10.575 "data_offset": 2048, 00:09:10.575 "data_size": 63488 00:09:10.575 } 00:09:10.575 ] 00:09:10.575 }' 00:09:10.575 19:50:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:10.575 19:50:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.833 19:50:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:10.833 19:50:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:10.833 [2024-11-26 19:50:01.758434] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:09:11.767 19:50:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:09:11.767 19:50:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.767 19:50:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.767 19:50:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.767 19:50:02 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:11.767 19:50:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:09:11.767 19:50:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:09:11.767 19:50:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:09:11.767 19:50:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:09:11.767 19:50:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:11.767 19:50:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:11.767 19:50:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:11.767 19:50:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:11.767 19:50:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:11.767 19:50:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:11.767 19:50:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:11.767 19:50:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:11.767 19:50:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:11.767 19:50:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:11.767 19:50:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:11.767 19:50:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.767 19:50:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.767 19:50:02 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.767 19:50:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:11.767 "name": "raid_bdev1", 00:09:11.767 "uuid": "7aaaf729-4200-4b70-abf7-6a148e8d39fb", 00:09:11.767 "strip_size_kb": 0, 00:09:11.767 "state": "online", 00:09:11.767 "raid_level": "raid1", 00:09:11.767 "superblock": true, 00:09:11.767 "num_base_bdevs": 4, 00:09:11.767 "num_base_bdevs_discovered": 4, 00:09:11.767 "num_base_bdevs_operational": 4, 00:09:11.767 "base_bdevs_list": [ 00:09:11.767 { 00:09:11.767 "name": "BaseBdev1", 00:09:11.768 "uuid": "b4d7632a-4d76-5f4b-91d9-70b9f32e20ea", 00:09:11.768 "is_configured": true, 00:09:11.768 "data_offset": 2048, 00:09:11.768 "data_size": 63488 00:09:11.768 }, 00:09:11.768 { 00:09:11.768 "name": "BaseBdev2", 00:09:11.768 "uuid": "b8b10b20-ea53-5f5a-8ea1-a0a58a5f25b7", 00:09:11.768 "is_configured": true, 00:09:11.768 "data_offset": 2048, 00:09:11.768 "data_size": 63488 00:09:11.768 }, 00:09:11.768 { 00:09:11.768 "name": "BaseBdev3", 00:09:11.768 "uuid": "0f821896-222a-5011-a5ee-d4bca31cd7e2", 00:09:11.768 "is_configured": true, 00:09:11.768 "data_offset": 2048, 00:09:11.768 "data_size": 63488 00:09:11.768 }, 00:09:11.768 { 00:09:11.768 "name": "BaseBdev4", 00:09:11.768 "uuid": "0e0bbb6b-6089-523d-a2a6-ef17e739a28d", 00:09:11.768 "is_configured": true, 00:09:11.768 "data_offset": 2048, 00:09:11.768 "data_size": 63488 00:09:11.768 } 00:09:11.768 ] 00:09:11.768 }' 00:09:11.768 19:50:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:11.768 19:50:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.025 19:50:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:12.025 19:50:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.025 19:50:02 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:12.025 [2024-11-26 19:50:02.940078] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:12.025 [2024-11-26 19:50:02.940213] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:12.025 [2024-11-26 19:50:02.942679] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:12.025 [2024-11-26 19:50:02.942811] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:12.025 [2024-11-26 19:50:02.942965] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:12.025 [2024-11-26 19:50:02.943035] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:09:12.025 19:50:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.025 { 00:09:12.025 "results": [ 00:09:12.025 { 00:09:12.025 "job": "raid_bdev1", 00:09:12.025 "core_mask": "0x1", 00:09:12.025 "workload": "randrw", 00:09:12.025 "percentage": 50, 00:09:12.025 "status": "finished", 00:09:12.025 "queue_depth": 1, 00:09:12.025 "io_size": 131072, 00:09:12.025 "runtime": 1.180143, 00:09:12.025 "iops": 12254.447130559602, 00:09:12.025 "mibps": 1531.8058913199502, 00:09:12.025 "io_failed": 0, 00:09:12.025 "io_timeout": 0, 00:09:12.025 "avg_latency_us": 79.0572692360882, 00:09:12.025 "min_latency_us": 23.04, 00:09:12.025 "max_latency_us": 1424.1476923076923 00:09:12.025 } 00:09:12.025 ], 00:09:12.025 "core_count": 1 00:09:12.025 } 00:09:12.025 19:50:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 73002 00:09:12.025 19:50:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 73002 ']' 00:09:12.025 19:50:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 73002 00:09:12.025 19:50:02 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@959 -- # uname 00:09:12.025 19:50:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:12.025 19:50:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73002 00:09:12.282 killing process with pid 73002 00:09:12.282 19:50:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:12.282 19:50:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:12.282 19:50:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73002' 00:09:12.282 19:50:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 73002 00:09:12.282 [2024-11-26 19:50:02.969692] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:12.282 19:50:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 73002 00:09:12.282 [2024-11-26 19:50:03.138330] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:12.847 19:50:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:12.847 19:50:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.aJQb07AI8f 00:09:12.847 19:50:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:12.847 ************************************ 00:09:12.847 END TEST raid_read_error_test 00:09:12.847 ************************************ 00:09:12.847 19:50:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:09:12.847 19:50:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:09:12.847 19:50:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:12.847 19:50:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:12.847 19:50:03 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:09:12.847 00:09:12.847 real 0m3.615s 00:09:12.847 user 0m4.292s 00:09:12.847 sys 0m0.460s 00:09:12.847 19:50:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:12.847 19:50:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.105 19:50:03 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 4 write 00:09:13.105 19:50:03 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:13.105 19:50:03 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:13.105 19:50:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:13.105 ************************************ 00:09:13.105 START TEST raid_write_error_test 00:09:13.105 ************************************ 00:09:13.105 19:50:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 4 write 00:09:13.105 19:50:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:09:13.105 19:50:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:09:13.105 19:50:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:09:13.105 19:50:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:13.105 19:50:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:13.105 19:50:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:13.105 19:50:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:13.105 19:50:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:13.105 19:50:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:13.105 19:50:03 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:13.105 19:50:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:13.105 19:50:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:13.105 19:50:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:13.105 19:50:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:13.105 19:50:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:09:13.105 19:50:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:13.105 19:50:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:13.105 19:50:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:09:13.105 19:50:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:13.105 19:50:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:13.105 19:50:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:13.105 19:50:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:13.105 19:50:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:13.105 19:50:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:13.105 19:50:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:09:13.105 19:50:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:09:13.105 19:50:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:13.105 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:13.105 19:50:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.qv7o3DW98p 00:09:13.105 19:50:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=73142 00:09:13.105 19:50:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 73142 00:09:13.105 19:50:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 73142 ']' 00:09:13.105 19:50:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:13.105 19:50:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:13.105 19:50:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:13.105 19:50:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:13.105 19:50:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.105 19:50:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:13.105 [2024-11-26 19:50:03.899014] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 
00:09:13.105 [2024-11-26 19:50:03.899289] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73142 ] 00:09:13.364 [2024-11-26 19:50:04.057409] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:13.364 [2024-11-26 19:50:04.157717] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:13.364 [2024-11-26 19:50:04.279336] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:13.364 [2024-11-26 19:50:04.279522] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:13.977 19:50:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:13.977 19:50:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:13.977 19:50:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:13.977 19:50:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:13.977 19:50:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.977 19:50:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.977 BaseBdev1_malloc 00:09:13.977 19:50:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.977 19:50:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:13.977 19:50:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.977 19:50:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.977 true 00:09:13.977 19:50:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:09:13.977 19:50:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:13.977 19:50:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.977 19:50:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.977 [2024-11-26 19:50:04.734081] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:13.977 [2024-11-26 19:50:04.734222] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:13.977 [2024-11-26 19:50:04.734244] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:13.977 [2024-11-26 19:50:04.734254] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:13.977 [2024-11-26 19:50:04.736107] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:13.977 [2024-11-26 19:50:04.736141] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:13.977 BaseBdev1 00:09:13.977 19:50:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.977 19:50:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:13.977 19:50:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:13.977 19:50:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.977 19:50:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.977 BaseBdev2_malloc 00:09:13.977 19:50:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.977 19:50:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:13.977 19:50:04 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.977 19:50:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.977 true 00:09:13.977 19:50:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.977 19:50:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:13.977 19:50:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.977 19:50:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.977 [2024-11-26 19:50:04.775494] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:13.977 [2024-11-26 19:50:04.775605] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:13.977 [2024-11-26 19:50:04.775622] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:13.977 [2024-11-26 19:50:04.775632] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:13.977 [2024-11-26 19:50:04.777507] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:13.977 [2024-11-26 19:50:04.777530] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:13.977 BaseBdev2 00:09:13.977 19:50:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.977 19:50:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:13.977 19:50:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:13.977 19:50:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.977 19:50:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:09:13.977 BaseBdev3_malloc 00:09:13.977 19:50:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.977 19:50:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:13.977 19:50:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.977 19:50:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.977 true 00:09:13.977 19:50:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.977 19:50:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:13.978 19:50:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.978 19:50:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.978 [2024-11-26 19:50:04.831596] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:13.978 [2024-11-26 19:50:04.831726] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:13.978 [2024-11-26 19:50:04.831747] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:13.978 [2024-11-26 19:50:04.831756] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:13.978 [2024-11-26 19:50:04.833610] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:13.978 [2024-11-26 19:50:04.833640] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:09:13.978 BaseBdev3 00:09:13.978 19:50:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.978 19:50:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:13.978 19:50:04 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:09:13.978 19:50:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.978 19:50:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.978 BaseBdev4_malloc 00:09:13.978 19:50:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.978 19:50:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:09:13.978 19:50:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.978 19:50:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.978 true 00:09:13.978 19:50:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.978 19:50:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:09:13.978 19:50:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.978 19:50:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.978 [2024-11-26 19:50:04.873313] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:09:13.978 [2024-11-26 19:50:04.873372] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:13.978 [2024-11-26 19:50:04.873387] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:09:13.978 [2024-11-26 19:50:04.873397] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:13.978 [2024-11-26 19:50:04.875227] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:13.978 [2024-11-26 19:50:04.875351] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:09:13.978 BaseBdev4 
00:09:13.978 19:50:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.978 19:50:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:09:13.978 19:50:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.978 19:50:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.978 [2024-11-26 19:50:04.881378] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:14.260 [2024-11-26 19:50:04.883059] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:14.260 [2024-11-26 19:50:04.883122] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:14.260 [2024-11-26 19:50:04.883177] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:09:14.260 [2024-11-26 19:50:04.883389] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:09:14.260 [2024-11-26 19:50:04.883400] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:14.260 [2024-11-26 19:50:04.883604] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:09:14.260 [2024-11-26 19:50:04.883732] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:09:14.260 [2024-11-26 19:50:04.883745] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:09:14.260 [2024-11-26 19:50:04.883858] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:14.260 19:50:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.260 19:50:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 
online raid1 0 4 00:09:14.260 19:50:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:14.260 19:50:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:14.260 19:50:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:14.260 19:50:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:14.260 19:50:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:14.260 19:50:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:14.260 19:50:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:14.260 19:50:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:14.260 19:50:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:14.260 19:50:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:14.260 19:50:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.260 19:50:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.260 19:50:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:14.260 19:50:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.260 19:50:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:14.260 "name": "raid_bdev1", 00:09:14.260 "uuid": "02b884e0-536d-48d3-b7f5-ba2e51ce330e", 00:09:14.260 "strip_size_kb": 0, 00:09:14.260 "state": "online", 00:09:14.260 "raid_level": "raid1", 00:09:14.260 "superblock": true, 00:09:14.260 "num_base_bdevs": 4, 00:09:14.260 "num_base_bdevs_discovered": 4, 00:09:14.260 
"num_base_bdevs_operational": 4, 00:09:14.260 "base_bdevs_list": [ 00:09:14.260 { 00:09:14.260 "name": "BaseBdev1", 00:09:14.260 "uuid": "10d22c7f-b105-5e71-8d05-ebea4aaae744", 00:09:14.260 "is_configured": true, 00:09:14.260 "data_offset": 2048, 00:09:14.260 "data_size": 63488 00:09:14.260 }, 00:09:14.260 { 00:09:14.260 "name": "BaseBdev2", 00:09:14.260 "uuid": "3daef71e-25f1-5c5c-b8b8-a20e2948da67", 00:09:14.260 "is_configured": true, 00:09:14.260 "data_offset": 2048, 00:09:14.260 "data_size": 63488 00:09:14.260 }, 00:09:14.260 { 00:09:14.260 "name": "BaseBdev3", 00:09:14.260 "uuid": "2c562739-d42f-502b-a022-9da6be757de0", 00:09:14.260 "is_configured": true, 00:09:14.260 "data_offset": 2048, 00:09:14.260 "data_size": 63488 00:09:14.260 }, 00:09:14.260 { 00:09:14.260 "name": "BaseBdev4", 00:09:14.260 "uuid": "2f398fb7-654f-55c5-b525-b74fc6ad0790", 00:09:14.260 "is_configured": true, 00:09:14.260 "data_offset": 2048, 00:09:14.260 "data_size": 63488 00:09:14.260 } 00:09:14.260 ] 00:09:14.260 }' 00:09:14.260 19:50:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:14.260 19:50:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.260 19:50:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:14.260 19:50:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:14.518 [2024-11-26 19:50:05.282326] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:09:15.452 19:50:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:09:15.452 19:50:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.452 19:50:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.452 [2024-11-26 19:50:06.200625] 
bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:09:15.452 [2024-11-26 19:50:06.200683] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:15.452 [2024-11-26 19:50:06.200897] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006a40 00:09:15.452 19:50:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.452 19:50:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:15.452 19:50:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:09:15.452 19:50:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:09:15.452 19:50:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:09:15.452 19:50:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:09:15.452 19:50:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:15.452 19:50:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:15.452 19:50:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:15.452 19:50:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:15.452 19:50:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:15.452 19:50:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:15.452 19:50:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:15.452 19:50:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:15.452 19:50:06 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:09:15.452 19:50:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:15.452 19:50:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:15.452 19:50:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.452 19:50:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.452 19:50:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.452 19:50:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:15.452 "name": "raid_bdev1", 00:09:15.452 "uuid": "02b884e0-536d-48d3-b7f5-ba2e51ce330e", 00:09:15.452 "strip_size_kb": 0, 00:09:15.452 "state": "online", 00:09:15.452 "raid_level": "raid1", 00:09:15.452 "superblock": true, 00:09:15.452 "num_base_bdevs": 4, 00:09:15.452 "num_base_bdevs_discovered": 3, 00:09:15.452 "num_base_bdevs_operational": 3, 00:09:15.452 "base_bdevs_list": [ 00:09:15.452 { 00:09:15.452 "name": null, 00:09:15.452 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:15.452 "is_configured": false, 00:09:15.452 "data_offset": 0, 00:09:15.452 "data_size": 63488 00:09:15.452 }, 00:09:15.452 { 00:09:15.452 "name": "BaseBdev2", 00:09:15.452 "uuid": "3daef71e-25f1-5c5c-b8b8-a20e2948da67", 00:09:15.452 "is_configured": true, 00:09:15.452 "data_offset": 2048, 00:09:15.452 "data_size": 63488 00:09:15.452 }, 00:09:15.452 { 00:09:15.452 "name": "BaseBdev3", 00:09:15.452 "uuid": "2c562739-d42f-502b-a022-9da6be757de0", 00:09:15.452 "is_configured": true, 00:09:15.452 "data_offset": 2048, 00:09:15.452 "data_size": 63488 00:09:15.452 }, 00:09:15.452 { 00:09:15.452 "name": "BaseBdev4", 00:09:15.452 "uuid": "2f398fb7-654f-55c5-b525-b74fc6ad0790", 00:09:15.452 "is_configured": true, 00:09:15.452 "data_offset": 2048, 00:09:15.452 "data_size": 63488 00:09:15.452 } 00:09:15.452 ] 
00:09:15.452 }' 00:09:15.452 19:50:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:15.452 19:50:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.715 19:50:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:15.715 19:50:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.715 19:50:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.715 [2024-11-26 19:50:06.503720] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:15.716 [2024-11-26 19:50:06.503861] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:15.716 [2024-11-26 19:50:06.506319] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:15.716 [2024-11-26 19:50:06.506368] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:15.716 [2024-11-26 19:50:06.506463] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:15.716 [2024-11-26 19:50:06.506472] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:09:15.716 { 00:09:15.716 "results": [ 00:09:15.716 { 00:09:15.716 "job": "raid_bdev1", 00:09:15.716 "core_mask": "0x1", 00:09:15.716 "workload": "randrw", 00:09:15.716 "percentage": 50, 00:09:15.716 "status": "finished", 00:09:15.716 "queue_depth": 1, 00:09:15.716 "io_size": 131072, 00:09:15.716 "runtime": 1.219877, 00:09:15.716 "iops": 13161.982724487796, 00:09:15.716 "mibps": 1645.2478405609745, 00:09:15.716 "io_failed": 0, 00:09:15.716 "io_timeout": 0, 00:09:15.716 "avg_latency_us": 73.44196542869189, 00:09:15.716 "min_latency_us": 22.744615384615386, 00:09:15.716 "max_latency_us": 1392.64 00:09:15.716 } 00:09:15.716 ], 00:09:15.716 "core_count": 1 00:09:15.716 
} 00:09:15.716 19:50:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.716 19:50:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 73142 00:09:15.716 19:50:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 73142 ']' 00:09:15.716 19:50:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 73142 00:09:15.716 19:50:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:09:15.716 19:50:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:15.716 19:50:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73142 00:09:15.716 killing process with pid 73142 00:09:15.716 19:50:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:15.716 19:50:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:15.716 19:50:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73142' 00:09:15.716 19:50:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 73142 00:09:15.716 [2024-11-26 19:50:06.535577] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:15.716 19:50:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 73142 00:09:15.979 [2024-11-26 19:50:06.702250] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:16.547 19:50:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.qv7o3DW98p 00:09:16.547 19:50:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:16.547 19:50:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:16.547 ************************************ 00:09:16.547 END TEST raid_write_error_test 
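The bdevperf summary block above reports both `iops` and `mibps` for the same run; as a quick consistency check (a minimal sketch, assuming only the `io_size`, `iops`, and `mibps` fields copied from the results JSON in this log), the two figures differ exactly by the per-I/O size:

```python
# Sanity-check the bdevperf result fields shown in the log above:
# mibps should equal iops * io_size, converted to MiB.
result = {
    "io_size": 131072,                 # 128 KiB per I/O, as reported
    "iops": 13161.982724487796,
    "mibps": 1645.2478405609745,
}

derived_mibps = result["iops"] * result["io_size"] / (1024 * 1024)
assert abs(derived_mibps - result["mibps"]) < 1e-9
```

With a 128 KiB I/O size, MiB/s is simply IOPS divided by 8, which matches the reported 1645.25 MiB/s.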
00:09:16.547 ************************************ 00:09:16.547 19:50:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:09:16.547 19:50:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:09:16.547 19:50:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:16.547 19:50:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:16.547 19:50:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:09:16.547 00:09:16.547 real 0m3.530s 00:09:16.547 user 0m4.087s 00:09:16.547 sys 0m0.465s 00:09:16.547 19:50:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:16.547 19:50:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.547 19:50:07 bdev_raid -- bdev/bdev_raid.sh@976 -- # '[' true = true ']' 00:09:16.547 19:50:07 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:09:16.547 19:50:07 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false true 00:09:16.547 19:50:07 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:09:16.547 19:50:07 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:16.547 19:50:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:16.547 ************************************ 00:09:16.547 START TEST raid_rebuild_test 00:09:16.547 ************************************ 00:09:16.547 19:50:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 false false true 00:09:16.547 19:50:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:09:16.547 19:50:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:09:16.547 19:50:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:09:16.547 19:50:07 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:09:16.547 19:50:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:09:16.547 19:50:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:09:16.547 19:50:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:09:16.547 19:50:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:09:16.547 19:50:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:09:16.547 19:50:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:09:16.547 19:50:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:09:16.547 19:50:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:09:16.547 19:50:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:09:16.547 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
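The "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." message above comes from `waitforlisten`, which blocks until the freshly launched bdevperf app's RPC socket accepts connections. A minimal sketch of such a polling loop (a hypothetical Python helper, not SPDK's actual shell implementation):

```python
import socket
import time

def waitforlisten(sock_path, timeout=5.0, interval=0.1):
    """Poll until a UNIX-domain socket at sock_path accepts connections.

    Returns True once a connect() succeeds, False if the deadline passes.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        try:
            s.connect(sock_path)
            return True            # the app is up and listening
        except OSError:
            time.sleep(interval)   # not ready yet; retry until the deadline
        finally:
            s.close()
    return False
```

The real helper also checks that the target PID is still alive while polling, so a crashed app fails fast instead of waiting out the full timeout.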
00:09:16.547 19:50:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:16.547 19:50:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:09:16.547 19:50:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:09:16.547 19:50:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:09:16.547 19:50:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:09:16.548 19:50:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:09:16.548 19:50:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:09:16.548 19:50:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:09:16.548 19:50:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:09:16.548 19:50:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:09:16.548 19:50:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=73269 00:09:16.548 19:50:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 73269 00:09:16.548 19:50:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 73269 ']' 00:09:16.548 19:50:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:16.548 19:50:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:16.548 19:50:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:09:16.548 19:50:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:16.548 19:50:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.548 19:50:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:09:16.548 I/O size of 3145728 is greater than zero copy threshold (65536). 00:09:16.548 Zero copy mechanism will not be used. 00:09:16.548 [2024-11-26 19:50:07.471020] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 00:09:16.548 [2024-11-26 19:50:07.471155] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73269 ] 00:09:16.806 [2024-11-26 19:50:07.629661] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:16.806 [2024-11-26 19:50:07.727683] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:17.064 [2024-11-26 19:50:07.846983] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:17.064 [2024-11-26 19:50:07.847009] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:17.632 19:50:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:17.632 19:50:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:09:17.632 19:50:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:09:17.632 19:50:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:17.632 19:50:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.632 19:50:08 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:17.632 BaseBdev1_malloc 00:09:17.632 19:50:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.632 19:50:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:09:17.632 19:50:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.632 19:50:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.632 [2024-11-26 19:50:08.338301] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:09:17.632 [2024-11-26 19:50:08.338488] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:17.632 [2024-11-26 19:50:08.338516] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:17.632 [2024-11-26 19:50:08.338526] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:17.632 [2024-11-26 19:50:08.340371] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:17.632 [2024-11-26 19:50:08.340426] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:17.632 BaseBdev1 00:09:17.632 19:50:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.632 19:50:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:09:17.632 19:50:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:17.632 19:50:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.632 19:50:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.632 BaseBdev2_malloc 00:09:17.632 19:50:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.632 19:50:08 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:09:17.632 19:50:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.632 19:50:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.632 [2024-11-26 19:50:08.371136] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:09:17.632 [2024-11-26 19:50:08.371178] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:17.632 [2024-11-26 19:50:08.371196] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:09:17.632 [2024-11-26 19:50:08.371206] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:17.632 [2024-11-26 19:50:08.372983] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:17.632 [2024-11-26 19:50:08.373012] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:17.632 BaseBdev2 00:09:17.632 19:50:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.632 19:50:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:09:17.632 19:50:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.632 19:50:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.632 spare_malloc 00:09:17.632 19:50:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.632 19:50:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:09:17.632 19:50:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.632 19:50:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.632 spare_delay 
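The `verify_raid_bdev_state` helper seen throughout this trace filters `rpc_cmd bdev_raid_get_bdevs all` output with `jq -r '.[] | select(.name == "raid_bdev1")'` before checking the state fields. The same lookup can be sketched in Python (the helper name and sample data are illustrative; the sample mirrors the JSON shape dumped later in this log):

```python
import json

def select_bdev(bdevs, name):
    """Return the first bdev dict whose "name" matches, like jq's select()."""
    return next((b for b in bdevs if b.get("name") == name), None)

# Sample output shaped like the `bdev_raid_get_bdevs all` dumps in this log.
raw = json.dumps([
    {"name": "raid_bdev1", "state": "online", "raid_level": "raid1",
     "num_base_bdevs": 2, "num_base_bdevs_discovered": 2},
    {"name": "other_bdev", "state": "offline"},
])

info = select_bdev(json.loads(raw), "raid_bdev1")
assert info["state"] == "online"
assert info["num_base_bdevs_discovered"] == 2
assert select_bdev(json.loads(raw), "missing") is None
```

The shell helper then compares `state`, `raid_level`, `strip_size_kb`, and the discovered/operational base bdev counts against the expected values passed by the test.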
00:09:17.632 19:50:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.632 19:50:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:09:17.632 19:50:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.632 19:50:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.632 [2024-11-26 19:50:08.425461] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:09:17.632 [2024-11-26 19:50:08.425504] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:17.632 [2024-11-26 19:50:08.425519] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:09:17.632 [2024-11-26 19:50:08.425528] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:17.632 [2024-11-26 19:50:08.427354] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:17.633 [2024-11-26 19:50:08.427383] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:09:17.633 spare 00:09:17.633 19:50:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.633 19:50:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:09:17.633 19:50:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.633 19:50:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.633 [2024-11-26 19:50:08.433507] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:17.633 [2024-11-26 19:50:08.435080] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:17.633 [2024-11-26 19:50:08.435247] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000007780 00:09:17.633 [2024-11-26 19:50:08.435264] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:09:17.633 [2024-11-26 19:50:08.435485] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:17.633 [2024-11-26 19:50:08.435602] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:09:17.633 [2024-11-26 19:50:08.435611] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:09:17.633 [2024-11-26 19:50:08.435729] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:17.633 19:50:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.633 19:50:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:17.633 19:50:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:17.633 19:50:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:17.633 19:50:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:17.633 19:50:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:17.633 19:50:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:17.633 19:50:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:17.633 19:50:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:17.633 19:50:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:17.633 19:50:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:17.633 19:50:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:17.633 19:50:08 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.633 19:50:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.633 19:50:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:17.633 19:50:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.633 19:50:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:17.633 "name": "raid_bdev1", 00:09:17.633 "uuid": "80de1a2b-76d5-4414-97c5-1ae5cccc9008", 00:09:17.633 "strip_size_kb": 0, 00:09:17.633 "state": "online", 00:09:17.633 "raid_level": "raid1", 00:09:17.633 "superblock": false, 00:09:17.633 "num_base_bdevs": 2, 00:09:17.633 "num_base_bdevs_discovered": 2, 00:09:17.633 "num_base_bdevs_operational": 2, 00:09:17.633 "base_bdevs_list": [ 00:09:17.633 { 00:09:17.633 "name": "BaseBdev1", 00:09:17.633 "uuid": "06fb440e-fccb-502b-9541-a03ea4246897", 00:09:17.633 "is_configured": true, 00:09:17.633 "data_offset": 0, 00:09:17.633 "data_size": 65536 00:09:17.633 }, 00:09:17.633 { 00:09:17.633 "name": "BaseBdev2", 00:09:17.633 "uuid": "0ad21409-b02b-50c0-b602-18699b6a70f4", 00:09:17.633 "is_configured": true, 00:09:17.633 "data_offset": 0, 00:09:17.633 "data_size": 65536 00:09:17.633 } 00:09:17.633 ] 00:09:17.633 }' 00:09:17.633 19:50:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:17.633 19:50:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.894 19:50:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:09:17.894 19:50:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:17.894 19:50:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.894 19:50:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.894 [2024-11-26 
19:50:08.753840] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:17.894 19:50:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.894 19:50:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:09:17.894 19:50:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:09:17.894 19:50:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:17.894 19:50:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.894 19:50:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.894 19:50:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.894 19:50:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:09:17.894 19:50:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:09:17.894 19:50:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:09:17.894 19:50:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:09:17.894 19:50:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:09:17.894 19:50:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:09:17.894 19:50:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:09:17.894 19:50:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:17.894 19:50:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:09:17.894 19:50:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:17.894 19:50:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:09:17.894 19:50:08 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:17.894 19:50:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:09:17.894 19:50:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:09:18.152 [2024-11-26 19:50:09.001690] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:09:18.152 /dev/nbd0 00:09:18.152 19:50:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:18.152 19:50:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:18.152 19:50:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:09:18.152 19:50:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:09:18.152 19:50:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:18.152 19:50:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:18.152 19:50:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:09:18.152 19:50:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:09:18.152 19:50:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:18.152 19:50:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:18.152 19:50:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:18.152 1+0 records in 00:09:18.152 1+0 records out 00:09:18.152 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000251626 s, 16.3 MB/s 00:09:18.152 19:50:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:18.152 19:50:09 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:09:18.152 19:50:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:18.152 19:50:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:18.152 19:50:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:09:18.152 19:50:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:18.152 19:50:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:09:18.152 19:50:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:09:18.152 19:50:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:09:18.152 19:50:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:09:22.332 65536+0 records in 00:09:22.332 65536+0 records out 00:09:22.332 33554432 bytes (34 MB, 32 MiB) copied, 4.18377 s, 8.0 MB/s 00:09:22.332 19:50:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:09:22.332 19:50:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:09:22.332 19:50:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:09:22.332 19:50:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:22.332 19:50:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:09:22.332 19:50:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:22.332 19:50:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:09:22.590 [2024-11-26 19:50:13.434897] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
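The `dd` run above writes 65536 blocks of 512 bytes through `/dev/nbd0` to fill the raid bdev before the rebuild; its reported figures (33554432 bytes, 32 MiB, 8.0 MB/s) follow directly from the block count and elapsed time (a minimal arithmetic sketch using only the numbers from this log):

```python
blocks, block_size = 65536, 512        # dd bs=512 count=65536
elapsed_s = 4.18377                    # wall time reported by dd

total_bytes = blocks * block_size
assert total_bytes == 33554432         # "33554432 bytes"
assert total_bytes / 2**20 == 32.0     # "32 MiB" (binary units)

mb_per_s = total_bytes / elapsed_s / 1e6   # dd reports decimal MB/s
assert round(mb_per_s, 1) == 8.0           # "8.0 MB/s"
```

Note that dd mixes units: the parenthesized size is binary (MiB) while the rate is decimal (MB/s), which is why 32 MiB over ~4.18 s prints as 8.0 MB/s rather than 7.65 MiB/s.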
00:09:22.590 19:50:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:22.590 19:50:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:22.590 19:50:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:22.590 19:50:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:22.590 19:50:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:22.590 19:50:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:22.590 19:50:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:09:22.590 19:50:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:09:22.590 19:50:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:09:22.590 19:50:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.590 19:50:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.590 [2024-11-26 19:50:13.458965] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:22.590 19:50:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.590 19:50:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:09:22.590 19:50:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:22.590 19:50:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:22.590 19:50:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:22.590 19:50:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:22.590 19:50:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:09:22.590 19:50:13 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:22.590 19:50:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:22.590 19:50:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:22.590 19:50:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:22.590 19:50:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:22.590 19:50:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.590 19:50:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.590 19:50:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:22.590 19:50:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.590 19:50:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:22.590 "name": "raid_bdev1", 00:09:22.590 "uuid": "80de1a2b-76d5-4414-97c5-1ae5cccc9008", 00:09:22.590 "strip_size_kb": 0, 00:09:22.590 "state": "online", 00:09:22.590 "raid_level": "raid1", 00:09:22.590 "superblock": false, 00:09:22.590 "num_base_bdevs": 2, 00:09:22.590 "num_base_bdevs_discovered": 1, 00:09:22.590 "num_base_bdevs_operational": 1, 00:09:22.590 "base_bdevs_list": [ 00:09:22.590 { 00:09:22.590 "name": null, 00:09:22.590 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:22.590 "is_configured": false, 00:09:22.590 "data_offset": 0, 00:09:22.590 "data_size": 65536 00:09:22.590 }, 00:09:22.590 { 00:09:22.590 "name": "BaseBdev2", 00:09:22.590 "uuid": "0ad21409-b02b-50c0-b602-18699b6a70f4", 00:09:22.590 "is_configured": true, 00:09:22.590 "data_offset": 0, 00:09:22.590 "data_size": 65536 00:09:22.590 } 00:09:22.590 ] 00:09:22.590 }' 00:09:22.590 19:50:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:22.590 19:50:13 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.849 19:50:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:09:22.849 19:50:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.849 19:50:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.849 [2024-11-26 19:50:13.747035] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:09:22.849 [2024-11-26 19:50:13.756117] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09bd0 00:09:22.849 19:50:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.849 19:50:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:09:22.849 [2024-11-26 19:50:13.757621] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:09:24.218 19:50:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:09:24.218 19:50:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:09:24.218 19:50:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:09:24.218 19:50:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:09:24.218 19:50:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:09:24.218 19:50:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:24.218 19:50:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:24.218 19:50:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.218 19:50:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.218 19:50:14 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.218 19:50:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:09:24.218 "name": "raid_bdev1", 00:09:24.218 "uuid": "80de1a2b-76d5-4414-97c5-1ae5cccc9008", 00:09:24.218 "strip_size_kb": 0, 00:09:24.218 "state": "online", 00:09:24.218 "raid_level": "raid1", 00:09:24.218 "superblock": false, 00:09:24.218 "num_base_bdevs": 2, 00:09:24.218 "num_base_bdevs_discovered": 2, 00:09:24.218 "num_base_bdevs_operational": 2, 00:09:24.218 "process": { 00:09:24.218 "type": "rebuild", 00:09:24.218 "target": "spare", 00:09:24.218 "progress": { 00:09:24.218 "blocks": 20480, 00:09:24.218 "percent": 31 00:09:24.218 } 00:09:24.218 }, 00:09:24.218 "base_bdevs_list": [ 00:09:24.218 { 00:09:24.218 "name": "spare", 00:09:24.218 "uuid": "eef29930-6b47-5b8c-99a3-4e0a53dd6083", 00:09:24.218 "is_configured": true, 00:09:24.218 "data_offset": 0, 00:09:24.218 "data_size": 65536 00:09:24.218 }, 00:09:24.218 { 00:09:24.218 "name": "BaseBdev2", 00:09:24.218 "uuid": "0ad21409-b02b-50c0-b602-18699b6a70f4", 00:09:24.218 "is_configured": true, 00:09:24.218 "data_offset": 0, 00:09:24.218 "data_size": 65536 00:09:24.218 } 00:09:24.218 ] 00:09:24.218 }' 00:09:24.218 19:50:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:09:24.218 19:50:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:09:24.218 19:50:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:09:24.218 19:50:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:09:24.218 19:50:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:09:24.218 19:50:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.218 19:50:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 
-- # set +x 00:09:24.218 [2024-11-26 19:50:14.851835] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:09:24.218 [2024-11-26 19:50:14.862586] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:09:24.218 [2024-11-26 19:50:14.862640] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:24.218 [2024-11-26 19:50:14.862652] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:09:24.218 [2024-11-26 19:50:14.862660] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:09:24.218 19:50:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.218 19:50:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:09:24.218 19:50:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:24.218 19:50:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:24.218 19:50:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:24.218 19:50:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:24.218 19:50:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:09:24.218 19:50:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:24.218 19:50:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:24.218 19:50:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:24.218 19:50:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:24.218 19:50:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:24.218 19:50:14 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:24.218 19:50:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.218 19:50:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.218 19:50:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.218 19:50:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:24.218 "name": "raid_bdev1", 00:09:24.218 "uuid": "80de1a2b-76d5-4414-97c5-1ae5cccc9008", 00:09:24.218 "strip_size_kb": 0, 00:09:24.218 "state": "online", 00:09:24.218 "raid_level": "raid1", 00:09:24.218 "superblock": false, 00:09:24.218 "num_base_bdevs": 2, 00:09:24.218 "num_base_bdevs_discovered": 1, 00:09:24.218 "num_base_bdevs_operational": 1, 00:09:24.218 "base_bdevs_list": [ 00:09:24.218 { 00:09:24.218 "name": null, 00:09:24.218 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:24.218 "is_configured": false, 00:09:24.218 "data_offset": 0, 00:09:24.218 "data_size": 65536 00:09:24.218 }, 00:09:24.218 { 00:09:24.218 "name": "BaseBdev2", 00:09:24.218 "uuid": "0ad21409-b02b-50c0-b602-18699b6a70f4", 00:09:24.218 "is_configured": true, 00:09:24.218 "data_offset": 0, 00:09:24.218 "data_size": 65536 00:09:24.218 } 00:09:24.218 ] 00:09:24.218 }' 00:09:24.218 19:50:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:24.218 19:50:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.475 19:50:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:09:24.475 19:50:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:09:24.475 19:50:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:09:24.475 19:50:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:09:24.475 19:50:15 bdev_raid.raid_rebuild_test 
-- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:09:24.475 19:50:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:24.475 19:50:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.475 19:50:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.475 19:50:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:24.475 19:50:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.475 19:50:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:09:24.475 "name": "raid_bdev1", 00:09:24.475 "uuid": "80de1a2b-76d5-4414-97c5-1ae5cccc9008", 00:09:24.475 "strip_size_kb": 0, 00:09:24.475 "state": "online", 00:09:24.475 "raid_level": "raid1", 00:09:24.475 "superblock": false, 00:09:24.475 "num_base_bdevs": 2, 00:09:24.475 "num_base_bdevs_discovered": 1, 00:09:24.475 "num_base_bdevs_operational": 1, 00:09:24.475 "base_bdevs_list": [ 00:09:24.475 { 00:09:24.475 "name": null, 00:09:24.475 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:24.475 "is_configured": false, 00:09:24.475 "data_offset": 0, 00:09:24.475 "data_size": 65536 00:09:24.475 }, 00:09:24.475 { 00:09:24.475 "name": "BaseBdev2", 00:09:24.475 "uuid": "0ad21409-b02b-50c0-b602-18699b6a70f4", 00:09:24.475 "is_configured": true, 00:09:24.475 "data_offset": 0, 00:09:24.475 "data_size": 65536 00:09:24.475 } 00:09:24.475 ] 00:09:24.475 }' 00:09:24.475 19:50:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:09:24.475 19:50:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:09:24.475 19:50:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:09:24.475 19:50:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:09:24.475 
19:50:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:09:24.475 19:50:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.475 19:50:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.475 [2024-11-26 19:50:15.293544] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:09:24.475 [2024-11-26 19:50:15.302377] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09ca0 00:09:24.475 19:50:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.475 19:50:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:09:24.475 [2024-11-26 19:50:15.303929] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:09:25.422 19:50:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:09:25.422 19:50:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:09:25.422 19:50:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:09:25.422 19:50:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:09:25.422 19:50:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:09:25.422 19:50:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:25.422 19:50:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.422 19:50:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.422 19:50:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:25.422 19:50:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.422 19:50:16 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:09:25.422 "name": "raid_bdev1", 00:09:25.422 "uuid": "80de1a2b-76d5-4414-97c5-1ae5cccc9008", 00:09:25.422 "strip_size_kb": 0, 00:09:25.422 "state": "online", 00:09:25.422 "raid_level": "raid1", 00:09:25.422 "superblock": false, 00:09:25.422 "num_base_bdevs": 2, 00:09:25.422 "num_base_bdevs_discovered": 2, 00:09:25.422 "num_base_bdevs_operational": 2, 00:09:25.422 "process": { 00:09:25.422 "type": "rebuild", 00:09:25.422 "target": "spare", 00:09:25.422 "progress": { 00:09:25.422 "blocks": 20480, 00:09:25.422 "percent": 31 00:09:25.422 } 00:09:25.422 }, 00:09:25.422 "base_bdevs_list": [ 00:09:25.422 { 00:09:25.422 "name": "spare", 00:09:25.422 "uuid": "eef29930-6b47-5b8c-99a3-4e0a53dd6083", 00:09:25.422 "is_configured": true, 00:09:25.422 "data_offset": 0, 00:09:25.422 "data_size": 65536 00:09:25.422 }, 00:09:25.422 { 00:09:25.422 "name": "BaseBdev2", 00:09:25.422 "uuid": "0ad21409-b02b-50c0-b602-18699b6a70f4", 00:09:25.422 "is_configured": true, 00:09:25.422 "data_offset": 0, 00:09:25.422 "data_size": 65536 00:09:25.422 } 00:09:25.422 ] 00:09:25.422 }' 00:09:25.422 19:50:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:09:25.680 19:50:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:09:25.680 19:50:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:09:25.680 19:50:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:09:25.680 19:50:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:09:25.680 19:50:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:09:25.680 19:50:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:09:25.680 19:50:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 
2 ']' 00:09:25.680 19:50:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=286 00:09:25.680 19:50:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:09:25.680 19:50:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:09:25.680 19:50:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:09:25.680 19:50:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:09:25.680 19:50:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:09:25.680 19:50:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:09:25.680 19:50:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:25.680 19:50:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:25.680 19:50:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.680 19:50:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.680 19:50:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.680 19:50:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:09:25.680 "name": "raid_bdev1", 00:09:25.680 "uuid": "80de1a2b-76d5-4414-97c5-1ae5cccc9008", 00:09:25.680 "strip_size_kb": 0, 00:09:25.680 "state": "online", 00:09:25.680 "raid_level": "raid1", 00:09:25.680 "superblock": false, 00:09:25.680 "num_base_bdevs": 2, 00:09:25.680 "num_base_bdevs_discovered": 2, 00:09:25.680 "num_base_bdevs_operational": 2, 00:09:25.680 "process": { 00:09:25.680 "type": "rebuild", 00:09:25.680 "target": "spare", 00:09:25.680 "progress": { 00:09:25.680 "blocks": 20480, 00:09:25.680 "percent": 31 00:09:25.680 } 00:09:25.680 }, 00:09:25.680 "base_bdevs_list": [ 00:09:25.680 { 
00:09:25.680 "name": "spare", 00:09:25.680 "uuid": "eef29930-6b47-5b8c-99a3-4e0a53dd6083", 00:09:25.680 "is_configured": true, 00:09:25.680 "data_offset": 0, 00:09:25.680 "data_size": 65536 00:09:25.680 }, 00:09:25.680 { 00:09:25.680 "name": "BaseBdev2", 00:09:25.680 "uuid": "0ad21409-b02b-50c0-b602-18699b6a70f4", 00:09:25.680 "is_configured": true, 00:09:25.680 "data_offset": 0, 00:09:25.680 "data_size": 65536 00:09:25.680 } 00:09:25.680 ] 00:09:25.680 }' 00:09:25.680 19:50:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:09:25.680 19:50:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:09:25.680 19:50:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:09:25.680 19:50:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:09:25.680 19:50:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:09:26.610 19:50:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:09:26.610 19:50:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:09:26.610 19:50:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:09:26.610 19:50:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:09:26.610 19:50:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:09:26.610 19:50:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:09:26.610 19:50:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:26.610 19:50:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.610 19:50:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:26.610 
19:50:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.610 19:50:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.610 19:50:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:09:26.610 "name": "raid_bdev1", 00:09:26.610 "uuid": "80de1a2b-76d5-4414-97c5-1ae5cccc9008", 00:09:26.610 "strip_size_kb": 0, 00:09:26.610 "state": "online", 00:09:26.610 "raid_level": "raid1", 00:09:26.610 "superblock": false, 00:09:26.610 "num_base_bdevs": 2, 00:09:26.610 "num_base_bdevs_discovered": 2, 00:09:26.610 "num_base_bdevs_operational": 2, 00:09:26.610 "process": { 00:09:26.610 "type": "rebuild", 00:09:26.610 "target": "spare", 00:09:26.610 "progress": { 00:09:26.610 "blocks": 43008, 00:09:26.610 "percent": 65 00:09:26.610 } 00:09:26.610 }, 00:09:26.610 "base_bdevs_list": [ 00:09:26.610 { 00:09:26.610 "name": "spare", 00:09:26.610 "uuid": "eef29930-6b47-5b8c-99a3-4e0a53dd6083", 00:09:26.610 "is_configured": true, 00:09:26.610 "data_offset": 0, 00:09:26.610 "data_size": 65536 00:09:26.610 }, 00:09:26.610 { 00:09:26.610 "name": "BaseBdev2", 00:09:26.611 "uuid": "0ad21409-b02b-50c0-b602-18699b6a70f4", 00:09:26.611 "is_configured": true, 00:09:26.611 "data_offset": 0, 00:09:26.611 "data_size": 65536 00:09:26.611 } 00:09:26.611 ] 00:09:26.611 }' 00:09:26.611 19:50:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:09:26.868 19:50:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:09:26.868 19:50:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:09:26.868 19:50:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:09:26.868 19:50:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:09:27.799 [2024-11-26 19:50:18.522329] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed 
on raid_bdev1 00:09:27.799 [2024-11-26 19:50:18.522444] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:09:27.799 [2024-11-26 19:50:18.522506] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:27.799 19:50:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:09:27.799 19:50:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:09:27.799 19:50:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:09:27.799 19:50:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:09:27.799 19:50:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:09:27.799 19:50:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:09:27.799 19:50:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:27.799 19:50:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:27.799 19:50:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.800 19:50:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.800 19:50:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.800 19:50:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:09:27.800 "name": "raid_bdev1", 00:09:27.800 "uuid": "80de1a2b-76d5-4414-97c5-1ae5cccc9008", 00:09:27.800 "strip_size_kb": 0, 00:09:27.800 "state": "online", 00:09:27.800 "raid_level": "raid1", 00:09:27.800 "superblock": false, 00:09:27.800 "num_base_bdevs": 2, 00:09:27.800 "num_base_bdevs_discovered": 2, 00:09:27.800 "num_base_bdevs_operational": 2, 00:09:27.800 "base_bdevs_list": [ 00:09:27.800 { 00:09:27.800 "name": "spare", 00:09:27.800 
"uuid": "eef29930-6b47-5b8c-99a3-4e0a53dd6083", 00:09:27.800 "is_configured": true, 00:09:27.800 "data_offset": 0, 00:09:27.800 "data_size": 65536 00:09:27.800 }, 00:09:27.800 { 00:09:27.800 "name": "BaseBdev2", 00:09:27.800 "uuid": "0ad21409-b02b-50c0-b602-18699b6a70f4", 00:09:27.800 "is_configured": true, 00:09:27.800 "data_offset": 0, 00:09:27.800 "data_size": 65536 00:09:27.800 } 00:09:27.800 ] 00:09:27.800 }' 00:09:27.800 19:50:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:09:27.800 19:50:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:09:27.800 19:50:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:09:27.800 19:50:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:09:27.800 19:50:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:09:27.800 19:50:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:09:27.800 19:50:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:09:27.800 19:50:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:09:27.800 19:50:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:09:27.800 19:50:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:09:27.800 19:50:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:27.800 19:50:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:27.800 19:50:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.800 19:50:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.800 19:50:18 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.800 19:50:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:09:27.800 "name": "raid_bdev1", 00:09:27.800 "uuid": "80de1a2b-76d5-4414-97c5-1ae5cccc9008", 00:09:27.800 "strip_size_kb": 0, 00:09:27.800 "state": "online", 00:09:27.800 "raid_level": "raid1", 00:09:27.800 "superblock": false, 00:09:27.800 "num_base_bdevs": 2, 00:09:27.800 "num_base_bdevs_discovered": 2, 00:09:27.800 "num_base_bdevs_operational": 2, 00:09:27.800 "base_bdevs_list": [ 00:09:27.800 { 00:09:27.800 "name": "spare", 00:09:27.800 "uuid": "eef29930-6b47-5b8c-99a3-4e0a53dd6083", 00:09:27.800 "is_configured": true, 00:09:27.800 "data_offset": 0, 00:09:27.800 "data_size": 65536 00:09:27.800 }, 00:09:27.800 { 00:09:27.800 "name": "BaseBdev2", 00:09:27.800 "uuid": "0ad21409-b02b-50c0-b602-18699b6a70f4", 00:09:27.800 "is_configured": true, 00:09:27.800 "data_offset": 0, 00:09:27.800 "data_size": 65536 00:09:27.800 } 00:09:27.800 ] 00:09:27.800 }' 00:09:27.800 19:50:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:09:28.057 19:50:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:09:28.058 19:50:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:09:28.058 19:50:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:09:28.058 19:50:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:28.058 19:50:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:28.058 19:50:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:28.058 19:50:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:28.058 19:50:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:09:28.058 19:50:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:28.058 19:50:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:28.058 19:50:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:28.058 19:50:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:28.058 19:50:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:28.058 19:50:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:28.058 19:50:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.058 19:50:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.058 19:50:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:28.058 19:50:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.058 19:50:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:28.058 "name": "raid_bdev1", 00:09:28.058 "uuid": "80de1a2b-76d5-4414-97c5-1ae5cccc9008", 00:09:28.058 "strip_size_kb": 0, 00:09:28.058 "state": "online", 00:09:28.058 "raid_level": "raid1", 00:09:28.058 "superblock": false, 00:09:28.058 "num_base_bdevs": 2, 00:09:28.058 "num_base_bdevs_discovered": 2, 00:09:28.058 "num_base_bdevs_operational": 2, 00:09:28.058 "base_bdevs_list": [ 00:09:28.058 { 00:09:28.058 "name": "spare", 00:09:28.058 "uuid": "eef29930-6b47-5b8c-99a3-4e0a53dd6083", 00:09:28.058 "is_configured": true, 00:09:28.058 "data_offset": 0, 00:09:28.058 "data_size": 65536 00:09:28.058 }, 00:09:28.058 { 00:09:28.058 "name": "BaseBdev2", 00:09:28.058 "uuid": "0ad21409-b02b-50c0-b602-18699b6a70f4", 00:09:28.058 "is_configured": true, 00:09:28.058 "data_offset": 0, 00:09:28.058 "data_size": 65536 00:09:28.058 } 00:09:28.058 
] 00:09:28.058 }' 00:09:28.058 19:50:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:28.058 19:50:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.317 19:50:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:28.317 19:50:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.317 19:50:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.317 [2024-11-26 19:50:19.090850] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:28.317 [2024-11-26 19:50:19.090884] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:28.317 [2024-11-26 19:50:19.090982] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:28.317 [2024-11-26 19:50:19.091061] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:28.317 [2024-11-26 19:50:19.091072] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:09:28.317 19:50:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.317 19:50:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:09:28.317 19:50:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:28.317 19:50:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.317 19:50:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.317 19:50:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.317 19:50:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:09:28.317 19:50:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 
00:09:28.317 19:50:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:09:28.317 19:50:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:09:28.317 19:50:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:09:28.317 19:50:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:09:28.317 19:50:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:28.317 19:50:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:28.317 19:50:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:28.317 19:50:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:09:28.317 19:50:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:28.317 19:50:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:28.317 19:50:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:09:28.575 /dev/nbd0 00:09:28.575 19:50:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:28.575 19:50:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:28.575 19:50:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:09:28.575 19:50:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:09:28.575 19:50:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:28.575 19:50:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:28.575 19:50:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:09:28.575 
19:50:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:09:28.575 19:50:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:28.575 19:50:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:28.575 19:50:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:28.575 1+0 records in 00:09:28.575 1+0 records out 00:09:28.575 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000531666 s, 7.7 MB/s 00:09:28.575 19:50:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:28.575 19:50:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:09:28.575 19:50:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:28.575 19:50:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:28.576 19:50:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:09:28.576 19:50:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:28.576 19:50:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:28.576 19:50:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:09:28.833 /dev/nbd1 00:09:28.833 19:50:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:28.833 19:50:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:28.833 19:50:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:09:28.833 19:50:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:09:28.833 19:50:19 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:28.833 19:50:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:28.833 19:50:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:09:28.833 19:50:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:09:28.833 19:50:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:28.833 19:50:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:28.833 19:50:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:28.833 1+0 records in 00:09:28.833 1+0 records out 00:09:28.833 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000338226 s, 12.1 MB/s 00:09:28.833 19:50:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:28.833 19:50:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:09:28.833 19:50:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:28.833 19:50:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:28.834 19:50:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:09:28.834 19:50:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:28.834 19:50:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:28.834 19:50:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:09:28.834 19:50:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:09:28.834 19:50:19 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:09:28.834 19:50:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:28.834 19:50:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:28.834 19:50:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:09:28.834 19:50:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:28.834 19:50:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:09:29.091 19:50:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:29.091 19:50:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:29.091 19:50:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:29.091 19:50:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:29.091 19:50:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:29.091 19:50:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:29.091 19:50:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:09:29.091 19:50:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:09:29.091 19:50:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:29.091 19:50:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:09:29.349 19:50:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:29.349 19:50:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:29.349 19:50:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:29.349 
19:50:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:29.349 19:50:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:29.349 19:50:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:29.349 19:50:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:09:29.349 19:50:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:09:29.349 19:50:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:09:29.349 19:50:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 73269 00:09:29.349 19:50:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 73269 ']' 00:09:29.349 19:50:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 73269 00:09:29.349 19:50:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:09:29.349 19:50:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:29.349 19:50:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73269 00:09:29.349 killing process with pid 73269 00:09:29.349 Received shutdown signal, test time was about 60.000000 seconds 00:09:29.349 00:09:29.349 Latency(us) 00:09:29.349 [2024-11-26T19:50:20.286Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:29.349 [2024-11-26T19:50:20.286Z] =================================================================================================================== 00:09:29.349 [2024-11-26T19:50:20.286Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:09:29.349 19:50:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:29.349 19:50:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:29.349 19:50:20 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 73269' 00:09:29.349 19:50:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@973 -- # kill 73269 00:09:29.349 [2024-11-26 19:50:20.200832] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:29.349 19:50:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@978 -- # wait 73269 00:09:29.608 [2024-11-26 19:50:20.398323] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:30.543 19:50:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:09:30.543 00:09:30.543 real 0m13.769s 00:09:30.543 user 0m14.883s 00:09:30.543 sys 0m2.730s 00:09:30.543 19:50:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:30.543 ************************************ 00:09:30.543 END TEST raid_rebuild_test 00:09:30.543 ************************************ 00:09:30.543 19:50:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.543 19:50:21 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false true 00:09:30.543 19:50:21 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:09:30.543 19:50:21 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:30.543 19:50:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:30.543 ************************************ 00:09:30.543 START TEST raid_rebuild_test_sb 00:09:30.543 ************************************ 00:09:30.543 19:50:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:09:30.543 19:50:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:09:30.543 19:50:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:09:30.543 19:50:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:09:30.543 
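The rebuild test that just ended verified the rebuilt member against the spare by exposing both as nbd devices and running `cmp -i 0 /dev/nbd0 /dev/nbd1` (bdev_raid.sh@738). The device setup needs a live SPDK target, but the comparison step itself reduces to the following sketch on two regular files (file contents and sizes here are illustrative, not the test's):

```shell
#!/usr/bin/env bash
# Byte-for-byte mirror verification as in bdev_raid.sh@738, demonstrated
# on two regular files with identical contents instead of nbd devices.
a=$(mktemp); b=$(mktemp)
head -c 65536 /dev/urandom > "$a"
cp "$a" "$b"
# -i 0 skips zero initial bytes, i.e. compare from offset 0;
# cmp exits nonzero at the first differing byte.
if cmp -i 0 "$a" "$b"; then
    echo "mirrors match"
fi
rm -f "$a" "$b"
```

Because `cmp` exits nonzero on any mismatch, a failed rebuild surfaces here as a test failure rather than needing explicit checksumming.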
19:50:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:09:30.543 19:50:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:09:30.543 19:50:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:09:30.543 19:50:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:09:30.543 19:50:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:09:30.543 19:50:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:09:30.543 19:50:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:09:30.543 19:50:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:09:30.543 19:50:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:09:30.543 19:50:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:09:30.543 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
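The `raid_rebuild_test_sb` prologue above builds its base-bdev name list with an arithmetic loop whose `echo` output is captured into an array (bdev_raid.sh@574-576). A standalone sketch of that construction, assuming the two-bdev case shown in the log:

```shell
#!/usr/bin/env bash
# Reconstructing the base_bdevs list construction from bdev_raid.sh@574-576:
# emit "BaseBdev1 .. BaseBdevN" and capture the names into a bash array.
num_base_bdevs=2
base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo "BaseBdev$i"; done))
echo "${base_bdevs[@]}"   # BaseBdev1 BaseBdev2
```

The same pattern generalizes to the larger `num_base_bdevs` values used by other invocations of `raid_rebuild_test`.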
00:09:30.543 19:50:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:30.543 19:50:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:09:30.543 19:50:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:09:30.543 19:50:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:09:30.543 19:50:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:09:30.543 19:50:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:09:30.543 19:50:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:09:30.543 19:50:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:09:30.543 19:50:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:09:30.543 19:50:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:09:30.543 19:50:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:09:30.543 19:50:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=73675 00:09:30.543 19:50:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 73675 00:09:30.543 19:50:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 73675 ']' 00:09:30.543 19:50:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:30.543 19:50:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:30.543 19:50:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:09:30.543 19:50:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:30.543 19:50:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.543 19:50:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:09:30.543 [2024-11-26 19:50:21.271802] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 00:09:30.543 I/O size of 3145728 is greater than zero copy threshold (65536). 00:09:30.543 Zero copy mechanism will not be used. 00:09:30.543 [2024-11-26 19:50:21.272070] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73675 ] 00:09:30.543 [2024-11-26 19:50:21.414497] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:30.801 [2024-11-26 19:50:21.516908] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:30.801 [2024-11-26 19:50:21.638431] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:30.801 [2024-11-26 19:50:21.638476] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:31.367 19:50:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:31.367 19:50:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:09:31.367 19:50:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:09:31.367 19:50:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:31.367 19:50:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.367 19:50:22 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.367 BaseBdev1_malloc 00:09:31.367 19:50:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.367 19:50:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:09:31.367 19:50:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.367 19:50:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.367 [2024-11-26 19:50:22.165075] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:09:31.367 [2024-11-26 19:50:22.165133] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:31.367 [2024-11-26 19:50:22.165154] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:31.367 [2024-11-26 19:50:22.165164] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:31.367 [2024-11-26 19:50:22.167030] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:31.367 [2024-11-26 19:50:22.167162] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:31.367 BaseBdev1 00:09:31.367 19:50:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.367 19:50:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:09:31.367 19:50:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:31.367 19:50:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.367 19:50:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.367 BaseBdev2_malloc 00:09:31.367 19:50:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:09:31.367 19:50:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:09:31.367 19:50:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.367 19:50:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.367 [2024-11-26 19:50:22.198509] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:09:31.367 [2024-11-26 19:50:22.198561] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:31.367 [2024-11-26 19:50:22.198582] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:09:31.367 [2024-11-26 19:50:22.198591] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:31.367 [2024-11-26 19:50:22.200450] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:31.367 [2024-11-26 19:50:22.200480] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:31.367 BaseBdev2 00:09:31.367 19:50:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.368 19:50:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:09:31.368 19:50:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.368 19:50:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.368 spare_malloc 00:09:31.368 19:50:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.368 19:50:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:09:31.368 19:50:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.368 19:50:22 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.368 spare_delay 00:09:31.368 19:50:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.368 19:50:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:09:31.368 19:50:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.368 19:50:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.368 [2024-11-26 19:50:22.253194] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:09:31.368 [2024-11-26 19:50:22.253253] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:31.368 [2024-11-26 19:50:22.253272] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:09:31.368 [2024-11-26 19:50:22.253281] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:31.368 [2024-11-26 19:50:22.255238] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:31.368 [2024-11-26 19:50:22.255274] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:09:31.368 spare 00:09:31.368 19:50:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.368 19:50:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:09:31.368 19:50:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.368 19:50:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.368 [2024-11-26 19:50:22.265257] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:31.368 [2024-11-26 19:50:22.267026] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 
BaseBdev2 is claimed 00:09:31.368 [2024-11-26 19:50:22.267182] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:09:31.368 [2024-11-26 19:50:22.267193] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:31.368 [2024-11-26 19:50:22.267438] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:31.368 [2024-11-26 19:50:22.267577] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:09:31.368 [2024-11-26 19:50:22.267590] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:09:31.368 [2024-11-26 19:50:22.267724] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:31.368 19:50:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.368 19:50:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:31.368 19:50:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:31.368 19:50:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:31.368 19:50:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:31.368 19:50:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:31.368 19:50:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:31.368 19:50:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:31.368 19:50:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:31.368 19:50:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:31.368 19:50:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local 
tmp 00:09:31.368 19:50:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:31.368 19:50:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.368 19:50:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:31.368 19:50:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.368 19:50:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.626 19:50:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:31.626 "name": "raid_bdev1", 00:09:31.626 "uuid": "4499658f-9382-4744-9622-4d569d8787cf", 00:09:31.626 "strip_size_kb": 0, 00:09:31.626 "state": "online", 00:09:31.626 "raid_level": "raid1", 00:09:31.626 "superblock": true, 00:09:31.626 "num_base_bdevs": 2, 00:09:31.626 "num_base_bdevs_discovered": 2, 00:09:31.626 "num_base_bdevs_operational": 2, 00:09:31.626 "base_bdevs_list": [ 00:09:31.626 { 00:09:31.626 "name": "BaseBdev1", 00:09:31.626 "uuid": "cba5343d-ff95-5c36-9915-a73cd2c8096b", 00:09:31.627 "is_configured": true, 00:09:31.627 "data_offset": 2048, 00:09:31.627 "data_size": 63488 00:09:31.627 }, 00:09:31.627 { 00:09:31.627 "name": "BaseBdev2", 00:09:31.627 "uuid": "8264b9d2-a710-533b-9c1c-f815c68be3a2", 00:09:31.627 "is_configured": true, 00:09:31.627 "data_offset": 2048, 00:09:31.627 "data_size": 63488 00:09:31.627 } 00:09:31.627 ] 00:09:31.627 }' 00:09:31.627 19:50:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:31.627 19:50:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.885 19:50:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:31.885 19:50:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.885 19:50:22 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.885 19:50:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:09:31.885 [2024-11-26 19:50:22.585570] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:31.885 19:50:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.885 19:50:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:09:31.885 19:50:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:31.885 19:50:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.885 19:50:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.885 19:50:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:09:31.886 19:50:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.886 19:50:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:09:31.886 19:50:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:09:31.886 19:50:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:09:31.886 19:50:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:09:31.886 19:50:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:09:31.886 19:50:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:09:31.886 19:50:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:09:31.886 19:50:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:31.886 19:50:22 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:09:31.886 19:50:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:31.886 19:50:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:09:31.886 19:50:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:31.886 19:50:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:09:31.886 19:50:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:09:32.175 [2024-11-26 19:50:22.833444] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:09:32.175 /dev/nbd0 00:09:32.175 19:50:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:32.175 19:50:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:32.175 19:50:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:09:32.175 19:50:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:09:32.175 19:50:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:32.175 19:50:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:32.175 19:50:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:09:32.175 19:50:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:09:32.175 19:50:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:32.175 19:50:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:32.175 19:50:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 
00:09:32.175 1+0 records in 00:09:32.175 1+0 records out 00:09:32.175 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000277008 s, 14.8 MB/s 00:09:32.175 19:50:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:32.175 19:50:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:09:32.175 19:50:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:32.175 19:50:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:32.175 19:50:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:09:32.175 19:50:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:32.175 19:50:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:09:32.175 19:50:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:09:32.175 19:50:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:09:32.175 19:50:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:09:36.383 63488+0 records in 00:09:36.383 63488+0 records out 00:09:36.383 32505856 bytes (33 MB, 31 MiB) copied, 4.06976 s, 8.0 MB/s 00:09:36.383 19:50:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:09:36.383 19:50:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:09:36.383 19:50:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:09:36.383 19:50:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:36.383 19:50:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:09:36.383 19:50:26 
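The fill step above writes the whole raid_bdev through nbd: 63488 blocks of 512 bytes, the 32505856-byte (31 MiB) total `dd` reports. That figure is consistent with the superblock accounting in the `jq` output earlier (`data_offset` 2048, `data_size` 63488) and the 32 MiB/512 B geometry of the `bdev_malloc_create 32 512` base bdevs; the 65536-block total below is derived from those numbers, not stated directly in the log:

```shell
#!/usr/bin/env bash
# Size accounting for the superblock (-s) variant of the rebuild test.
blocklen=512
data_offset=2048     # blocks reserved for the on-disk superblock (from the jq output)
data_size=63488      # usable data blocks per base bdev (raid blockcnt in the log)
total_bytes=$((data_size * blocklen))
echo "$total_bytes"  # 32505856, matching dd's "32505856 bytes (33 MB, 31 MiB)"
# superblock + data should cover the full 32 MiB malloc bdev:
[ $((data_offset + data_size)) -eq $(( (32 * 1024 * 1024) / blocklen )) ] && echo "geometry consistent"
```

This is why `write_unit_size=1` suffices for raid1 here: unlike raid5f, no full-stripe write unit is needed, so the fill is a plain sequential `dd`.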
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:36.383 19:50:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:09:36.383 19:50:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:36.383 19:50:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:36.383 19:50:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:36.383 19:50:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:36.383 [2024-11-26 19:50:27.168509] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:36.383 19:50:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:36.383 19:50:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:36.383 19:50:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:09:36.383 19:50:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:09:36.383 19:50:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:09:36.384 19:50:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.384 19:50:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.384 [2024-11-26 19:50:27.176655] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:36.384 19:50:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.384 19:50:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:09:36.384 19:50:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:36.384 19:50:27 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:36.384 19:50:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:36.384 19:50:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:36.384 19:50:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:09:36.384 19:50:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:36.384 19:50:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:36.384 19:50:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:36.384 19:50:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:36.384 19:50:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:36.384 19:50:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:36.384 19:50:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.384 19:50:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.384 19:50:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.384 19:50:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:36.384 "name": "raid_bdev1", 00:09:36.384 "uuid": "4499658f-9382-4744-9622-4d569d8787cf", 00:09:36.384 "strip_size_kb": 0, 00:09:36.384 "state": "online", 00:09:36.384 "raid_level": "raid1", 00:09:36.384 "superblock": true, 00:09:36.384 "num_base_bdevs": 2, 00:09:36.384 "num_base_bdevs_discovered": 1, 00:09:36.384 "num_base_bdevs_operational": 1, 00:09:36.384 "base_bdevs_list": [ 00:09:36.384 { 00:09:36.384 "name": null, 00:09:36.384 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:36.384 
"is_configured": false, 00:09:36.384 "data_offset": 0, 00:09:36.384 "data_size": 63488 00:09:36.384 }, 00:09:36.384 { 00:09:36.384 "name": "BaseBdev2", 00:09:36.384 "uuid": "8264b9d2-a710-533b-9c1c-f815c68be3a2", 00:09:36.384 "is_configured": true, 00:09:36.384 "data_offset": 2048, 00:09:36.384 "data_size": 63488 00:09:36.384 } 00:09:36.384 ] 00:09:36.384 }' 00:09:36.384 19:50:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:36.384 19:50:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.642 19:50:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:09:36.642 19:50:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.642 19:50:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.642 [2024-11-26 19:50:27.492774] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:09:36.642 [2024-11-26 19:50:27.502980] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3360 00:09:36.642 19:50:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.642 19:50:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:09:36.642 [2024-11-26 19:50:27.504792] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:09:37.679 19:50:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:09:37.679 19:50:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:09:37.679 19:50:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:09:37.679 19:50:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:09:37.679 19:50:28 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:09:37.679 19:50:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:37.679 19:50:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:37.679 19:50:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.679 19:50:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.679 19:50:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.679 19:50:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:09:37.679 "name": "raid_bdev1", 00:09:37.679 "uuid": "4499658f-9382-4744-9622-4d569d8787cf", 00:09:37.680 "strip_size_kb": 0, 00:09:37.680 "state": "online", 00:09:37.680 "raid_level": "raid1", 00:09:37.680 "superblock": true, 00:09:37.680 "num_base_bdevs": 2, 00:09:37.680 "num_base_bdevs_discovered": 2, 00:09:37.680 "num_base_bdevs_operational": 2, 00:09:37.680 "process": { 00:09:37.680 "type": "rebuild", 00:09:37.680 "target": "spare", 00:09:37.680 "progress": { 00:09:37.680 "blocks": 20480, 00:09:37.680 "percent": 32 00:09:37.680 } 00:09:37.680 }, 00:09:37.680 "base_bdevs_list": [ 00:09:37.680 { 00:09:37.680 "name": "spare", 00:09:37.680 "uuid": "79fe64c8-f749-548d-b901-0ffbc8d10712", 00:09:37.680 "is_configured": true, 00:09:37.680 "data_offset": 2048, 00:09:37.680 "data_size": 63488 00:09:37.680 }, 00:09:37.680 { 00:09:37.680 "name": "BaseBdev2", 00:09:37.680 "uuid": "8264b9d2-a710-533b-9c1c-f815c68be3a2", 00:09:37.680 "is_configured": true, 00:09:37.680 "data_offset": 2048, 00:09:37.680 "data_size": 63488 00:09:37.680 } 00:09:37.680 ] 00:09:37.680 }' 00:09:37.680 19:50:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:09:37.680 19:50:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == 
\r\e\b\u\i\l\d ]] 00:09:37.680 19:50:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:09:37.680 19:50:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:09:37.680 19:50:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:09:37.680 19:50:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.680 19:50:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.680 [2024-11-26 19:50:28.610927] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:09:37.680 [2024-11-26 19:50:28.611473] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:09:37.680 [2024-11-26 19:50:28.611549] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:37.680 [2024-11-26 19:50:28.611567] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:09:37.680 [2024-11-26 19:50:28.611578] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:09:37.939 19:50:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.939 19:50:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:09:37.939 19:50:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:37.939 19:50:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:37.939 19:50:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:37.939 19:50:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:37.939 19:50:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 
00:09:37.939 19:50:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:37.939 19:50:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:37.939 19:50:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:37.939 19:50:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:37.939 19:50:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:37.939 19:50:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:37.939 19:50:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.939 19:50:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.939 19:50:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.939 19:50:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:37.939 "name": "raid_bdev1", 00:09:37.939 "uuid": "4499658f-9382-4744-9622-4d569d8787cf", 00:09:37.939 "strip_size_kb": 0, 00:09:37.939 "state": "online", 00:09:37.939 "raid_level": "raid1", 00:09:37.939 "superblock": true, 00:09:37.939 "num_base_bdevs": 2, 00:09:37.939 "num_base_bdevs_discovered": 1, 00:09:37.939 "num_base_bdevs_operational": 1, 00:09:37.939 "base_bdevs_list": [ 00:09:37.939 { 00:09:37.939 "name": null, 00:09:37.939 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:37.939 "is_configured": false, 00:09:37.939 "data_offset": 0, 00:09:37.939 "data_size": 63488 00:09:37.939 }, 00:09:37.939 { 00:09:37.939 "name": "BaseBdev2", 00:09:37.939 "uuid": "8264b9d2-a710-533b-9c1c-f815c68be3a2", 00:09:37.939 "is_configured": true, 00:09:37.939 "data_offset": 2048, 00:09:37.939 "data_size": 63488 00:09:37.939 } 00:09:37.939 ] 00:09:37.939 }' 00:09:37.939 19:50:28 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:37.939 19:50:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.198 19:50:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:09:38.198 19:50:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:09:38.198 19:50:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:09:38.198 19:50:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:09:38.198 19:50:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:09:38.198 19:50:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:38.198 19:50:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:38.198 19:50:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.198 19:50:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.198 19:50:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.198 19:50:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:09:38.198 "name": "raid_bdev1", 00:09:38.198 "uuid": "4499658f-9382-4744-9622-4d569d8787cf", 00:09:38.198 "strip_size_kb": 0, 00:09:38.198 "state": "online", 00:09:38.198 "raid_level": "raid1", 00:09:38.198 "superblock": true, 00:09:38.198 "num_base_bdevs": 2, 00:09:38.198 "num_base_bdevs_discovered": 1, 00:09:38.198 "num_base_bdevs_operational": 1, 00:09:38.198 "base_bdevs_list": [ 00:09:38.198 { 00:09:38.198 "name": null, 00:09:38.198 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:38.198 "is_configured": false, 00:09:38.198 "data_offset": 0, 00:09:38.198 "data_size": 63488 00:09:38.198 }, 00:09:38.198 { 00:09:38.198 "name": 
"BaseBdev2", 00:09:38.198 "uuid": "8264b9d2-a710-533b-9c1c-f815c68be3a2", 00:09:38.198 "is_configured": true, 00:09:38.198 "data_offset": 2048, 00:09:38.198 "data_size": 63488 00:09:38.198 } 00:09:38.198 ] 00:09:38.198 }' 00:09:38.198 19:50:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:09:38.198 19:50:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:09:38.198 19:50:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:09:38.198 19:50:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:09:38.198 19:50:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:09:38.198 19:50:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.198 19:50:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.198 [2024-11-26 19:50:29.035835] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:09:38.198 [2024-11-26 19:50:29.045509] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3430 00:09:38.198 19:50:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.198 19:50:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:09:38.198 [2024-11-26 19:50:29.047278] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:09:39.130 19:50:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:09:39.131 19:50:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:09:39.131 19:50:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:09:39.131 19:50:30 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@171 -- # local target=spare 00:09:39.131 19:50:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:09:39.131 19:50:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:39.131 19:50:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:39.131 19:50:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.131 19:50:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.389 19:50:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.389 19:50:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:09:39.389 "name": "raid_bdev1", 00:09:39.389 "uuid": "4499658f-9382-4744-9622-4d569d8787cf", 00:09:39.389 "strip_size_kb": 0, 00:09:39.389 "state": "online", 00:09:39.389 "raid_level": "raid1", 00:09:39.389 "superblock": true, 00:09:39.389 "num_base_bdevs": 2, 00:09:39.389 "num_base_bdevs_discovered": 2, 00:09:39.389 "num_base_bdevs_operational": 2, 00:09:39.389 "process": { 00:09:39.389 "type": "rebuild", 00:09:39.389 "target": "spare", 00:09:39.389 "progress": { 00:09:39.389 "blocks": 20480, 00:09:39.389 "percent": 32 00:09:39.389 } 00:09:39.389 }, 00:09:39.389 "base_bdevs_list": [ 00:09:39.389 { 00:09:39.389 "name": "spare", 00:09:39.389 "uuid": "79fe64c8-f749-548d-b901-0ffbc8d10712", 00:09:39.389 "is_configured": true, 00:09:39.389 "data_offset": 2048, 00:09:39.389 "data_size": 63488 00:09:39.389 }, 00:09:39.389 { 00:09:39.389 "name": "BaseBdev2", 00:09:39.389 "uuid": "8264b9d2-a710-533b-9c1c-f815c68be3a2", 00:09:39.389 "is_configured": true, 00:09:39.389 "data_offset": 2048, 00:09:39.389 "data_size": 63488 00:09:39.389 } 00:09:39.389 ] 00:09:39.389 }' 00:09:39.389 19:50:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 
00:09:39.389 19:50:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:09:39.390 19:50:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:09:39.390 19:50:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:09:39.390 19:50:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:09:39.390 19:50:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:09:39.390 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:09:39.390 19:50:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:09:39.390 19:50:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:09:39.390 19:50:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:09:39.390 19:50:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=300 00:09:39.390 19:50:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:09:39.390 19:50:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:09:39.390 19:50:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:09:39.390 19:50:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:09:39.390 19:50:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:09:39.390 19:50:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:09:39.390 19:50:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:39.390 19:50:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:39.390 
19:50:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.390 19:50:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.390 19:50:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.390 19:50:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:09:39.390 "name": "raid_bdev1", 00:09:39.390 "uuid": "4499658f-9382-4744-9622-4d569d8787cf", 00:09:39.390 "strip_size_kb": 0, 00:09:39.390 "state": "online", 00:09:39.390 "raid_level": "raid1", 00:09:39.390 "superblock": true, 00:09:39.390 "num_base_bdevs": 2, 00:09:39.390 "num_base_bdevs_discovered": 2, 00:09:39.390 "num_base_bdevs_operational": 2, 00:09:39.390 "process": { 00:09:39.390 "type": "rebuild", 00:09:39.390 "target": "spare", 00:09:39.390 "progress": { 00:09:39.390 "blocks": 22528, 00:09:39.390 "percent": 35 00:09:39.390 } 00:09:39.390 }, 00:09:39.390 "base_bdevs_list": [ 00:09:39.390 { 00:09:39.390 "name": "spare", 00:09:39.390 "uuid": "79fe64c8-f749-548d-b901-0ffbc8d10712", 00:09:39.390 "is_configured": true, 00:09:39.390 "data_offset": 2048, 00:09:39.390 "data_size": 63488 00:09:39.390 }, 00:09:39.390 { 00:09:39.390 "name": "BaseBdev2", 00:09:39.390 "uuid": "8264b9d2-a710-533b-9c1c-f815c68be3a2", 00:09:39.390 "is_configured": true, 00:09:39.390 "data_offset": 2048, 00:09:39.390 "data_size": 63488 00:09:39.390 } 00:09:39.390 ] 00:09:39.390 }' 00:09:39.390 19:50:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:09:39.390 19:50:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:09:39.390 19:50:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:09:39.390 19:50:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:09:39.390 19:50:30 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@711 -- # sleep 1 00:09:40.324 19:50:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:09:40.324 19:50:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:09:40.324 19:50:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:09:40.324 19:50:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:09:40.324 19:50:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:09:40.324 19:50:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:09:40.324 19:50:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:40.324 19:50:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:40.324 19:50:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.324 19:50:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.583 19:50:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.583 19:50:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:09:40.583 "name": "raid_bdev1", 00:09:40.583 "uuid": "4499658f-9382-4744-9622-4d569d8787cf", 00:09:40.583 "strip_size_kb": 0, 00:09:40.583 "state": "online", 00:09:40.583 "raid_level": "raid1", 00:09:40.583 "superblock": true, 00:09:40.583 "num_base_bdevs": 2, 00:09:40.583 "num_base_bdevs_discovered": 2, 00:09:40.583 "num_base_bdevs_operational": 2, 00:09:40.583 "process": { 00:09:40.583 "type": "rebuild", 00:09:40.583 "target": "spare", 00:09:40.583 "progress": { 00:09:40.583 "blocks": 43008, 00:09:40.583 "percent": 67 00:09:40.583 } 00:09:40.583 }, 00:09:40.583 "base_bdevs_list": [ 00:09:40.583 { 00:09:40.583 "name": "spare", 
00:09:40.583 "uuid": "79fe64c8-f749-548d-b901-0ffbc8d10712", 00:09:40.583 "is_configured": true, 00:09:40.583 "data_offset": 2048, 00:09:40.583 "data_size": 63488 00:09:40.583 }, 00:09:40.583 { 00:09:40.583 "name": "BaseBdev2", 00:09:40.583 "uuid": "8264b9d2-a710-533b-9c1c-f815c68be3a2", 00:09:40.583 "is_configured": true, 00:09:40.583 "data_offset": 2048, 00:09:40.583 "data_size": 63488 00:09:40.583 } 00:09:40.583 ] 00:09:40.583 }' 00:09:40.583 19:50:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:09:40.583 19:50:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:09:40.583 19:50:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:09:40.583 19:50:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:09:40.583 19:50:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:09:41.518 [2024-11-26 19:50:32.164968] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:09:41.518 [2024-11-26 19:50:32.165053] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:09:41.518 [2024-11-26 19:50:32.165170] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:41.518 19:50:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:09:41.518 19:50:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:09:41.518 19:50:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:09:41.518 19:50:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:09:41.518 19:50:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:09:41.518 19:50:32 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:09:41.518 19:50:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:41.518 19:50:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.518 19:50:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:41.518 19:50:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.518 19:50:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.518 19:50:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:09:41.518 "name": "raid_bdev1", 00:09:41.518 "uuid": "4499658f-9382-4744-9622-4d569d8787cf", 00:09:41.518 "strip_size_kb": 0, 00:09:41.518 "state": "online", 00:09:41.518 "raid_level": "raid1", 00:09:41.518 "superblock": true, 00:09:41.518 "num_base_bdevs": 2, 00:09:41.518 "num_base_bdevs_discovered": 2, 00:09:41.518 "num_base_bdevs_operational": 2, 00:09:41.518 "base_bdevs_list": [ 00:09:41.518 { 00:09:41.518 "name": "spare", 00:09:41.518 "uuid": "79fe64c8-f749-548d-b901-0ffbc8d10712", 00:09:41.518 "is_configured": true, 00:09:41.518 "data_offset": 2048, 00:09:41.518 "data_size": 63488 00:09:41.518 }, 00:09:41.518 { 00:09:41.518 "name": "BaseBdev2", 00:09:41.518 "uuid": "8264b9d2-a710-533b-9c1c-f815c68be3a2", 00:09:41.518 "is_configured": true, 00:09:41.518 "data_offset": 2048, 00:09:41.518 "data_size": 63488 00:09:41.518 } 00:09:41.518 ] 00:09:41.518 }' 00:09:41.518 19:50:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:09:41.518 19:50:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:09:41.518 19:50:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:09:41.777 19:50:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ 
none == \s\p\a\r\e ]] 00:09:41.777 19:50:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:09:41.777 19:50:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:09:41.777 19:50:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:09:41.777 19:50:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:09:41.777 19:50:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:09:41.777 19:50:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:09:41.777 19:50:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:41.777 19:50:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:41.777 19:50:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.777 19:50:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.777 19:50:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.777 19:50:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:09:41.777 "name": "raid_bdev1", 00:09:41.777 "uuid": "4499658f-9382-4744-9622-4d569d8787cf", 00:09:41.777 "strip_size_kb": 0, 00:09:41.777 "state": "online", 00:09:41.777 "raid_level": "raid1", 00:09:41.777 "superblock": true, 00:09:41.777 "num_base_bdevs": 2, 00:09:41.777 "num_base_bdevs_discovered": 2, 00:09:41.777 "num_base_bdevs_operational": 2, 00:09:41.777 "base_bdevs_list": [ 00:09:41.777 { 00:09:41.777 "name": "spare", 00:09:41.777 "uuid": "79fe64c8-f749-548d-b901-0ffbc8d10712", 00:09:41.777 "is_configured": true, 00:09:41.777 "data_offset": 2048, 00:09:41.777 "data_size": 63488 00:09:41.777 }, 00:09:41.777 { 00:09:41.777 "name": "BaseBdev2", 00:09:41.777 "uuid": 
"8264b9d2-a710-533b-9c1c-f815c68be3a2", 00:09:41.777 "is_configured": true, 00:09:41.777 "data_offset": 2048, 00:09:41.777 "data_size": 63488 00:09:41.777 } 00:09:41.777 ] 00:09:41.777 }' 00:09:41.777 19:50:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:09:41.777 19:50:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:09:41.777 19:50:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:09:41.777 19:50:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:09:41.777 19:50:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:41.778 19:50:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:41.778 19:50:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:41.778 19:50:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:41.778 19:50:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:41.778 19:50:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:41.778 19:50:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:41.778 19:50:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:41.778 19:50:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:41.778 19:50:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:41.778 19:50:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:41.778 19:50:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.778 19:50:32 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.778 19:50:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:41.778 19:50:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.778 19:50:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:41.778 "name": "raid_bdev1", 00:09:41.778 "uuid": "4499658f-9382-4744-9622-4d569d8787cf", 00:09:41.778 "strip_size_kb": 0, 00:09:41.778 "state": "online", 00:09:41.778 "raid_level": "raid1", 00:09:41.778 "superblock": true, 00:09:41.778 "num_base_bdevs": 2, 00:09:41.778 "num_base_bdevs_discovered": 2, 00:09:41.778 "num_base_bdevs_operational": 2, 00:09:41.778 "base_bdevs_list": [ 00:09:41.778 { 00:09:41.778 "name": "spare", 00:09:41.778 "uuid": "79fe64c8-f749-548d-b901-0ffbc8d10712", 00:09:41.778 "is_configured": true, 00:09:41.778 "data_offset": 2048, 00:09:41.778 "data_size": 63488 00:09:41.778 }, 00:09:41.778 { 00:09:41.778 "name": "BaseBdev2", 00:09:41.778 "uuid": "8264b9d2-a710-533b-9c1c-f815c68be3a2", 00:09:41.778 "is_configured": true, 00:09:41.778 "data_offset": 2048, 00:09:41.778 "data_size": 63488 00:09:41.778 } 00:09:41.778 ] 00:09:41.778 }' 00:09:41.778 19:50:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:41.778 19:50:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.036 19:50:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:42.036 19:50:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.036 19:50:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.036 [2024-11-26 19:50:32.868382] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:42.036 [2024-11-26 19:50:32.868412] bdev_raid.c:1899:raid_bdev_deconfigure: 
*DEBUG*: raid bdev state changing from online to offline 00:09:42.036 [2024-11-26 19:50:32.868492] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:42.036 [2024-11-26 19:50:32.868559] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:42.036 [2024-11-26 19:50:32.868570] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:09:42.036 19:50:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.036 19:50:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:42.036 19:50:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:09:42.036 19:50:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.036 19:50:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.036 19:50:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.036 19:50:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:09:42.036 19:50:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:09:42.036 19:50:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:09:42.036 19:50:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:09:42.036 19:50:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:09:42.036 19:50:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:09:42.036 19:50:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:42.036 19:50:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:09:42.036 19:50:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:42.036 19:50:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:09:42.036 19:50:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:42.036 19:50:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:42.036 19:50:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:09:42.294 /dev/nbd0 00:09:42.294 19:50:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:42.294 19:50:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:42.294 19:50:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:09:42.294 19:50:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:09:42.294 19:50:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:42.294 19:50:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:42.294 19:50:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:09:42.294 19:50:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:09:42.294 19:50:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:42.294 19:50:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:42.294 19:50:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:42.294 1+0 records in 00:09:42.294 1+0 records out 00:09:42.294 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00048415 s, 8.5 MB/s 00:09:42.294 19:50:33 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:42.294 19:50:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:09:42.294 19:50:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:42.294 19:50:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:42.294 19:50:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:09:42.294 19:50:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:42.294 19:50:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:42.294 19:50:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:09:42.552 /dev/nbd1 00:09:42.552 19:50:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:42.552 19:50:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:42.552 19:50:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:09:42.552 19:50:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:09:42.552 19:50:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:42.552 19:50:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:42.552 19:50:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:09:42.552 19:50:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:09:42.552 19:50:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:42.552 19:50:33 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:42.552 19:50:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:42.552 1+0 records in 00:09:42.552 1+0 records out 00:09:42.552 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000689792 s, 5.9 MB/s 00:09:42.552 19:50:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:42.552 19:50:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:09:42.552 19:50:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:42.552 19:50:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:42.552 19:50:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:09:42.552 19:50:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:42.552 19:50:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:42.553 19:50:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:09:42.873 19:50:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:09:42.873 19:50:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:09:42.873 19:50:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:42.873 19:50:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:42.873 19:50:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:09:42.873 19:50:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:42.873 19:50:33 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:09:42.873 19:50:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:42.873 19:50:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:42.873 19:50:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:42.873 19:50:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:42.873 19:50:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:42.873 19:50:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:42.873 19:50:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:09:42.873 19:50:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:09:42.873 19:50:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:42.873 19:50:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:09:43.159 19:50:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:43.159 19:50:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:43.159 19:50:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:43.159 19:50:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:43.159 19:50:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:43.159 19:50:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:43.159 19:50:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:09:43.159 19:50:33 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@45 -- # return 0 00:09:43.159 19:50:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:09:43.159 19:50:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:09:43.159 19:50:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.159 19:50:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.160 19:50:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.160 19:50:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:09:43.160 19:50:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.160 19:50:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.160 [2024-11-26 19:50:33.969923] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:09:43.160 [2024-11-26 19:50:33.970072] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:43.160 [2024-11-26 19:50:33.970150] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:09:43.160 [2024-11-26 19:50:33.970271] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:43.160 [2024-11-26 19:50:33.972721] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:43.160 [2024-11-26 19:50:33.972823] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:09:43.160 [2024-11-26 19:50:33.972974] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:09:43.160 [2024-11-26 19:50:33.973076] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:09:43.160 [2024-11-26 19:50:33.973269] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is 
claimed 00:09:43.160 spare 00:09:43.160 19:50:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.160 19:50:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:09:43.160 19:50:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.160 19:50:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.160 [2024-11-26 19:50:34.073447] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:09:43.160 [2024-11-26 19:50:34.073509] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:43.160 [2024-11-26 19:50:34.073871] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1ae0 00:09:43.160 [2024-11-26 19:50:34.074075] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:09:43.160 [2024-11-26 19:50:34.074088] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:09:43.160 [2024-11-26 19:50:34.074286] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:43.160 19:50:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.160 19:50:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:43.160 19:50:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:43.160 19:50:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:43.160 19:50:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:43.160 19:50:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:43.160 19:50:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:09:43.160 19:50:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:43.160 19:50:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:43.160 19:50:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:43.160 19:50:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:43.419 19:50:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:43.419 19:50:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.419 19:50:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.419 19:50:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:43.419 19:50:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.419 19:50:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:43.419 "name": "raid_bdev1", 00:09:43.419 "uuid": "4499658f-9382-4744-9622-4d569d8787cf", 00:09:43.419 "strip_size_kb": 0, 00:09:43.419 "state": "online", 00:09:43.419 "raid_level": "raid1", 00:09:43.419 "superblock": true, 00:09:43.419 "num_base_bdevs": 2, 00:09:43.419 "num_base_bdevs_discovered": 2, 00:09:43.419 "num_base_bdevs_operational": 2, 00:09:43.419 "base_bdevs_list": [ 00:09:43.419 { 00:09:43.419 "name": "spare", 00:09:43.419 "uuid": "79fe64c8-f749-548d-b901-0ffbc8d10712", 00:09:43.419 "is_configured": true, 00:09:43.419 "data_offset": 2048, 00:09:43.419 "data_size": 63488 00:09:43.419 }, 00:09:43.419 { 00:09:43.419 "name": "BaseBdev2", 00:09:43.419 "uuid": "8264b9d2-a710-533b-9c1c-f815c68be3a2", 00:09:43.419 "is_configured": true, 00:09:43.419 "data_offset": 2048, 00:09:43.419 "data_size": 63488 00:09:43.419 } 00:09:43.419 ] 00:09:43.419 }' 00:09:43.419 19:50:34 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:43.419 19:50:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.677 19:50:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:09:43.677 19:50:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:09:43.677 19:50:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:09:43.677 19:50:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:09:43.677 19:50:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:09:43.677 19:50:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:43.677 19:50:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:43.677 19:50:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.677 19:50:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.677 19:50:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.677 19:50:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:09:43.677 "name": "raid_bdev1", 00:09:43.677 "uuid": "4499658f-9382-4744-9622-4d569d8787cf", 00:09:43.677 "strip_size_kb": 0, 00:09:43.677 "state": "online", 00:09:43.677 "raid_level": "raid1", 00:09:43.677 "superblock": true, 00:09:43.677 "num_base_bdevs": 2, 00:09:43.677 "num_base_bdevs_discovered": 2, 00:09:43.677 "num_base_bdevs_operational": 2, 00:09:43.677 "base_bdevs_list": [ 00:09:43.677 { 00:09:43.677 "name": "spare", 00:09:43.677 "uuid": "79fe64c8-f749-548d-b901-0ffbc8d10712", 00:09:43.677 "is_configured": true, 00:09:43.677 "data_offset": 2048, 00:09:43.677 "data_size": 63488 00:09:43.677 }, 
00:09:43.677 { 00:09:43.677 "name": "BaseBdev2", 00:09:43.677 "uuid": "8264b9d2-a710-533b-9c1c-f815c68be3a2", 00:09:43.677 "is_configured": true, 00:09:43.677 "data_offset": 2048, 00:09:43.677 "data_size": 63488 00:09:43.677 } 00:09:43.677 ] 00:09:43.677 }' 00:09:43.677 19:50:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:09:43.677 19:50:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:09:43.677 19:50:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:09:43.677 19:50:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:09:43.677 19:50:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:43.677 19:50:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:09:43.677 19:50:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.677 19:50:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.677 19:50:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.677 19:50:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:09:43.677 19:50:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:09:43.677 19:50:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.677 19:50:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.677 [2024-11-26 19:50:34.534383] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:09:43.677 19:50:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.677 19:50:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:09:43.677 19:50:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:43.677 19:50:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:43.677 19:50:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:43.677 19:50:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:43.677 19:50:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:09:43.677 19:50:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:43.677 19:50:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:43.677 19:50:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:43.677 19:50:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:43.677 19:50:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:43.677 19:50:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.677 19:50:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.677 19:50:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:43.677 19:50:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.677 19:50:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:43.677 "name": "raid_bdev1", 00:09:43.677 "uuid": "4499658f-9382-4744-9622-4d569d8787cf", 00:09:43.677 "strip_size_kb": 0, 00:09:43.677 "state": "online", 00:09:43.677 "raid_level": "raid1", 00:09:43.677 "superblock": true, 00:09:43.677 "num_base_bdevs": 2, 00:09:43.677 "num_base_bdevs_discovered": 1, 00:09:43.677 "num_base_bdevs_operational": 
1, 00:09:43.677 "base_bdevs_list": [ 00:09:43.677 { 00:09:43.677 "name": null, 00:09:43.677 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:43.677 "is_configured": false, 00:09:43.677 "data_offset": 0, 00:09:43.677 "data_size": 63488 00:09:43.677 }, 00:09:43.677 { 00:09:43.677 "name": "BaseBdev2", 00:09:43.677 "uuid": "8264b9d2-a710-533b-9c1c-f815c68be3a2", 00:09:43.677 "is_configured": true, 00:09:43.677 "data_offset": 2048, 00:09:43.677 "data_size": 63488 00:09:43.677 } 00:09:43.677 ] 00:09:43.677 }' 00:09:43.678 19:50:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:43.678 19:50:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.935 19:50:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:09:43.935 19:50:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.935 19:50:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.935 [2024-11-26 19:50:34.846475] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:09:43.935 [2024-11-26 19:50:34.846804] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:09:43.935 [2024-11-26 19:50:34.846826] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:09:43.935 [2024-11-26 19:50:34.846868] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:09:43.935 [2024-11-26 19:50:34.858281] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1bb0 00:09:43.935 19:50:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.935 19:50:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:09:43.935 [2024-11-26 19:50:34.860407] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:09:45.309 19:50:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:09:45.309 19:50:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:09:45.309 19:50:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:09:45.309 19:50:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:09:45.309 19:50:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:09:45.309 19:50:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:45.309 19:50:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:45.309 19:50:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.309 19:50:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.309 19:50:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.309 19:50:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:09:45.309 "name": "raid_bdev1", 00:09:45.309 "uuid": "4499658f-9382-4744-9622-4d569d8787cf", 00:09:45.309 "strip_size_kb": 0, 00:09:45.309 "state": "online", 00:09:45.309 "raid_level": "raid1", 
00:09:45.309 "superblock": true, 00:09:45.309 "num_base_bdevs": 2, 00:09:45.309 "num_base_bdevs_discovered": 2, 00:09:45.309 "num_base_bdevs_operational": 2, 00:09:45.309 "process": { 00:09:45.309 "type": "rebuild", 00:09:45.309 "target": "spare", 00:09:45.309 "progress": { 00:09:45.309 "blocks": 20480, 00:09:45.309 "percent": 32 00:09:45.309 } 00:09:45.309 }, 00:09:45.309 "base_bdevs_list": [ 00:09:45.309 { 00:09:45.309 "name": "spare", 00:09:45.309 "uuid": "79fe64c8-f749-548d-b901-0ffbc8d10712", 00:09:45.309 "is_configured": true, 00:09:45.309 "data_offset": 2048, 00:09:45.309 "data_size": 63488 00:09:45.309 }, 00:09:45.309 { 00:09:45.309 "name": "BaseBdev2", 00:09:45.309 "uuid": "8264b9d2-a710-533b-9c1c-f815c68be3a2", 00:09:45.309 "is_configured": true, 00:09:45.309 "data_offset": 2048, 00:09:45.309 "data_size": 63488 00:09:45.309 } 00:09:45.309 ] 00:09:45.309 }' 00:09:45.309 19:50:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:09:45.309 19:50:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:09:45.309 19:50:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:09:45.309 19:50:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:09:45.309 19:50:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:09:45.309 19:50:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.309 19:50:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.310 [2024-11-26 19:50:35.961921] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:09:45.310 [2024-11-26 19:50:35.966869] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:09:45.310 [2024-11-26 19:50:35.966925] bdev_raid.c: 345:raid_bdev_destroy_cb: 
*DEBUG*: raid_bdev_destroy_cb 00:09:45.310 [2024-11-26 19:50:35.966944] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:09:45.310 [2024-11-26 19:50:35.966952] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:09:45.310 19:50:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.310 19:50:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:09:45.310 19:50:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:45.310 19:50:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:45.310 19:50:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:45.310 19:50:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:45.310 19:50:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:09:45.310 19:50:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:45.310 19:50:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:45.310 19:50:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:45.310 19:50:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:45.310 19:50:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:45.310 19:50:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:45.310 19:50:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.310 19:50:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.310 19:50:36 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.310 19:50:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:45.310 "name": "raid_bdev1", 00:09:45.310 "uuid": "4499658f-9382-4744-9622-4d569d8787cf", 00:09:45.310 "strip_size_kb": 0, 00:09:45.310 "state": "online", 00:09:45.310 "raid_level": "raid1", 00:09:45.310 "superblock": true, 00:09:45.310 "num_base_bdevs": 2, 00:09:45.310 "num_base_bdevs_discovered": 1, 00:09:45.310 "num_base_bdevs_operational": 1, 00:09:45.310 "base_bdevs_list": [ 00:09:45.310 { 00:09:45.310 "name": null, 00:09:45.310 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:45.310 "is_configured": false, 00:09:45.310 "data_offset": 0, 00:09:45.310 "data_size": 63488 00:09:45.310 }, 00:09:45.310 { 00:09:45.310 "name": "BaseBdev2", 00:09:45.310 "uuid": "8264b9d2-a710-533b-9c1c-f815c68be3a2", 00:09:45.310 "is_configured": true, 00:09:45.310 "data_offset": 2048, 00:09:45.310 "data_size": 63488 00:09:45.310 } 00:09:45.310 ] 00:09:45.310 }' 00:09:45.310 19:50:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:45.310 19:50:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.568 19:50:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:09:45.568 19:50:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.568 19:50:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.568 [2024-11-26 19:50:36.326094] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:09:45.568 [2024-11-26 19:50:36.326158] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:45.568 [2024-11-26 19:50:36.326179] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:09:45.568 [2024-11-26 19:50:36.326189] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:45.568 [2024-11-26 19:50:36.326630] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:45.568 [2024-11-26 19:50:36.326650] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:09:45.568 [2024-11-26 19:50:36.326735] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:09:45.568 [2024-11-26 19:50:36.326748] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:09:45.568 [2024-11-26 19:50:36.326757] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:09:45.568 [2024-11-26 19:50:36.326778] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:09:45.568 [2024-11-26 19:50:36.335960] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:09:45.568 spare 00:09:45.568 19:50:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.568 19:50:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:09:45.568 [2024-11-26 19:50:36.337595] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:09:46.503 19:50:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:09:46.503 19:50:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:09:46.503 19:50:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:09:46.503 19:50:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:09:46.503 19:50:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:09:46.503 19:50:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:09:46.503 19:50:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:46.503 19:50:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.503 19:50:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.503 19:50:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.503 19:50:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:09:46.503 "name": "raid_bdev1", 00:09:46.503 "uuid": "4499658f-9382-4744-9622-4d569d8787cf", 00:09:46.503 "strip_size_kb": 0, 00:09:46.503 "state": "online", 00:09:46.503 "raid_level": "raid1", 00:09:46.503 "superblock": true, 00:09:46.503 "num_base_bdevs": 2, 00:09:46.503 "num_base_bdevs_discovered": 2, 00:09:46.503 "num_base_bdevs_operational": 2, 00:09:46.503 "process": { 00:09:46.503 "type": "rebuild", 00:09:46.503 "target": "spare", 00:09:46.503 "progress": { 00:09:46.503 "blocks": 20480, 00:09:46.503 "percent": 32 00:09:46.503 } 00:09:46.503 }, 00:09:46.503 "base_bdevs_list": [ 00:09:46.503 { 00:09:46.503 "name": "spare", 00:09:46.503 "uuid": "79fe64c8-f749-548d-b901-0ffbc8d10712", 00:09:46.503 "is_configured": true, 00:09:46.503 "data_offset": 2048, 00:09:46.503 "data_size": 63488 00:09:46.503 }, 00:09:46.503 { 00:09:46.503 "name": "BaseBdev2", 00:09:46.503 "uuid": "8264b9d2-a710-533b-9c1c-f815c68be3a2", 00:09:46.503 "is_configured": true, 00:09:46.503 "data_offset": 2048, 00:09:46.503 "data_size": 63488 00:09:46.503 } 00:09:46.503 ] 00:09:46.503 }' 00:09:46.503 19:50:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:09:46.503 19:50:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:09:46.504 19:50:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:09:46.762 
19:50:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:09:46.762 19:50:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:09:46.762 19:50:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.762 19:50:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.762 [2024-11-26 19:50:37.444031] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:09:46.762 [2024-11-26 19:50:37.544099] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:09:46.762 [2024-11-26 19:50:37.544152] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:46.762 [2024-11-26 19:50:37.544168] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:09:46.762 [2024-11-26 19:50:37.544174] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:09:46.762 19:50:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.762 19:50:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:09:46.762 19:50:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:46.762 19:50:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:46.762 19:50:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:46.762 19:50:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:46.762 19:50:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:09:46.762 19:50:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:46.762 19:50:37 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:46.762 19:50:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:46.762 19:50:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:46.762 19:50:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:46.762 19:50:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.762 19:50:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.762 19:50:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:46.762 19:50:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.762 19:50:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:46.762 "name": "raid_bdev1", 00:09:46.762 "uuid": "4499658f-9382-4744-9622-4d569d8787cf", 00:09:46.762 "strip_size_kb": 0, 00:09:46.762 "state": "online", 00:09:46.762 "raid_level": "raid1", 00:09:46.762 "superblock": true, 00:09:46.762 "num_base_bdevs": 2, 00:09:46.762 "num_base_bdevs_discovered": 1, 00:09:46.762 "num_base_bdevs_operational": 1, 00:09:46.762 "base_bdevs_list": [ 00:09:46.762 { 00:09:46.762 "name": null, 00:09:46.762 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:46.762 "is_configured": false, 00:09:46.762 "data_offset": 0, 00:09:46.762 "data_size": 63488 00:09:46.762 }, 00:09:46.762 { 00:09:46.762 "name": "BaseBdev2", 00:09:46.762 "uuid": "8264b9d2-a710-533b-9c1c-f815c68be3a2", 00:09:46.762 "is_configured": true, 00:09:46.762 "data_offset": 2048, 00:09:46.762 "data_size": 63488 00:09:46.762 } 00:09:46.762 ] 00:09:46.762 }' 00:09:46.762 19:50:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:46.762 19:50:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.021 19:50:37 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:09:47.021 19:50:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:09:47.021 19:50:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:09:47.021 19:50:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:09:47.021 19:50:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:09:47.021 19:50:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:47.021 19:50:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.021 19:50:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:47.021 19:50:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.021 19:50:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.021 19:50:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:09:47.021 "name": "raid_bdev1", 00:09:47.021 "uuid": "4499658f-9382-4744-9622-4d569d8787cf", 00:09:47.021 "strip_size_kb": 0, 00:09:47.021 "state": "online", 00:09:47.021 "raid_level": "raid1", 00:09:47.021 "superblock": true, 00:09:47.021 "num_base_bdevs": 2, 00:09:47.021 "num_base_bdevs_discovered": 1, 00:09:47.021 "num_base_bdevs_operational": 1, 00:09:47.021 "base_bdevs_list": [ 00:09:47.021 { 00:09:47.021 "name": null, 00:09:47.021 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:47.021 "is_configured": false, 00:09:47.021 "data_offset": 0, 00:09:47.021 "data_size": 63488 00:09:47.021 }, 00:09:47.021 { 00:09:47.021 "name": "BaseBdev2", 00:09:47.021 "uuid": "8264b9d2-a710-533b-9c1c-f815c68be3a2", 00:09:47.021 "is_configured": true, 00:09:47.021 "data_offset": 2048, 00:09:47.021 "data_size": 
63488 00:09:47.021 } 00:09:47.021 ] 00:09:47.021 }' 00:09:47.021 19:50:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:09:47.021 19:50:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:09:47.021 19:50:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:09:47.280 19:50:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:09:47.280 19:50:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:09:47.280 19:50:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.280 19:50:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.280 19:50:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.280 19:50:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:09:47.280 19:50:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.280 19:50:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.280 [2024-11-26 19:50:37.995142] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:09:47.280 [2024-11-26 19:50:37.995275] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:47.280 [2024-11-26 19:50:37.995304] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:09:47.280 [2024-11-26 19:50:37.995313] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:47.280 [2024-11-26 19:50:37.995728] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:47.280 [2024-11-26 19:50:37.995747] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev 
for: BaseBdev1 00:09:47.280 [2024-11-26 19:50:37.995816] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:09:47.280 [2024-11-26 19:50:37.995829] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:09:47.280 [2024-11-26 19:50:37.995839] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:09:47.280 [2024-11-26 19:50:37.995848] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:09:47.280 BaseBdev1 00:09:47.280 19:50:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.280 19:50:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:09:48.214 19:50:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:09:48.214 19:50:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:48.214 19:50:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:48.214 19:50:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:48.214 19:50:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:48.214 19:50:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:09:48.214 19:50:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:48.214 19:50:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:48.214 19:50:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:48.214 19:50:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:48.214 19:50:39 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:48.214 19:50:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:48.214 19:50:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.214 19:50:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:48.214 19:50:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.214 19:50:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:48.214 "name": "raid_bdev1", 00:09:48.214 "uuid": "4499658f-9382-4744-9622-4d569d8787cf", 00:09:48.214 "strip_size_kb": 0, 00:09:48.214 "state": "online", 00:09:48.214 "raid_level": "raid1", 00:09:48.214 "superblock": true, 00:09:48.214 "num_base_bdevs": 2, 00:09:48.214 "num_base_bdevs_discovered": 1, 00:09:48.214 "num_base_bdevs_operational": 1, 00:09:48.214 "base_bdevs_list": [ 00:09:48.214 { 00:09:48.214 "name": null, 00:09:48.214 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:48.214 "is_configured": false, 00:09:48.214 "data_offset": 0, 00:09:48.214 "data_size": 63488 00:09:48.214 }, 00:09:48.215 { 00:09:48.215 "name": "BaseBdev2", 00:09:48.215 "uuid": "8264b9d2-a710-533b-9c1c-f815c68be3a2", 00:09:48.215 "is_configured": true, 00:09:48.215 "data_offset": 2048, 00:09:48.215 "data_size": 63488 00:09:48.215 } 00:09:48.215 ] 00:09:48.215 }' 00:09:48.215 19:50:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:48.215 19:50:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:48.473 19:50:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:09:48.473 19:50:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:09:48.473 19:50:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local 
process_type=none 00:09:48.473 19:50:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:09:48.473 19:50:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:09:48.473 19:50:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:48.473 19:50:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.473 19:50:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:48.473 19:50:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:48.473 19:50:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.473 19:50:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:09:48.473 "name": "raid_bdev1", 00:09:48.473 "uuid": "4499658f-9382-4744-9622-4d569d8787cf", 00:09:48.473 "strip_size_kb": 0, 00:09:48.473 "state": "online", 00:09:48.473 "raid_level": "raid1", 00:09:48.473 "superblock": true, 00:09:48.473 "num_base_bdevs": 2, 00:09:48.473 "num_base_bdevs_discovered": 1, 00:09:48.473 "num_base_bdevs_operational": 1, 00:09:48.473 "base_bdevs_list": [ 00:09:48.473 { 00:09:48.473 "name": null, 00:09:48.473 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:48.473 "is_configured": false, 00:09:48.473 "data_offset": 0, 00:09:48.473 "data_size": 63488 00:09:48.473 }, 00:09:48.473 { 00:09:48.473 "name": "BaseBdev2", 00:09:48.473 "uuid": "8264b9d2-a710-533b-9c1c-f815c68be3a2", 00:09:48.473 "is_configured": true, 00:09:48.473 "data_offset": 2048, 00:09:48.473 "data_size": 63488 00:09:48.473 } 00:09:48.473 ] 00:09:48.473 }' 00:09:48.473 19:50:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:09:48.473 19:50:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:09:48.473 19:50:39 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:09:48.731 19:50:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:09:48.731 19:50:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:09:48.731 19:50:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:09:48.731 19:50:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:09:48.731 19:50:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:09:48.731 19:50:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:48.731 19:50:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:09:48.731 19:50:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:48.731 19:50:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:09:48.731 19:50:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.731 19:50:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:48.731 [2024-11-26 19:50:39.435460] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:48.731 [2024-11-26 19:50:39.435617] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:09:48.731 [2024-11-26 19:50:39.435632] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:09:48.731 request: 00:09:48.731 { 00:09:48.731 "base_bdev": "BaseBdev1", 00:09:48.731 "raid_bdev": "raid_bdev1", 00:09:48.731 "method": 
"bdev_raid_add_base_bdev", 00:09:48.731 "req_id": 1 00:09:48.731 } 00:09:48.731 Got JSON-RPC error response 00:09:48.731 response: 00:09:48.731 { 00:09:48.731 "code": -22, 00:09:48.731 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:09:48.731 } 00:09:48.731 19:50:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:48.731 19:50:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:09:48.731 19:50:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:48.731 19:50:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:48.731 19:50:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:48.731 19:50:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:09:49.665 19:50:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:09:49.665 19:50:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:49.665 19:50:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:49.665 19:50:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:49.665 19:50:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:49.665 19:50:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:09:49.665 19:50:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:49.665 19:50:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:49.665 19:50:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:49.665 19:50:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:49.665 19:50:40 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:49.665 19:50:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:49.665 19:50:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.665 19:50:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:49.665 19:50:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.665 19:50:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:49.665 "name": "raid_bdev1", 00:09:49.665 "uuid": "4499658f-9382-4744-9622-4d569d8787cf", 00:09:49.665 "strip_size_kb": 0, 00:09:49.665 "state": "online", 00:09:49.665 "raid_level": "raid1", 00:09:49.665 "superblock": true, 00:09:49.665 "num_base_bdevs": 2, 00:09:49.665 "num_base_bdevs_discovered": 1, 00:09:49.665 "num_base_bdevs_operational": 1, 00:09:49.665 "base_bdevs_list": [ 00:09:49.665 { 00:09:49.665 "name": null, 00:09:49.665 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:49.665 "is_configured": false, 00:09:49.665 "data_offset": 0, 00:09:49.665 "data_size": 63488 00:09:49.665 }, 00:09:49.665 { 00:09:49.665 "name": "BaseBdev2", 00:09:49.665 "uuid": "8264b9d2-a710-533b-9c1c-f815c68be3a2", 00:09:49.665 "is_configured": true, 00:09:49.665 "data_offset": 2048, 00:09:49.665 "data_size": 63488 00:09:49.665 } 00:09:49.665 ] 00:09:49.665 }' 00:09:49.665 19:50:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:49.665 19:50:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:49.923 19:50:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:09:49.923 19:50:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:09:49.923 19:50:40 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=none 00:09:49.923 19:50:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:09:49.923 19:50:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:09:49.923 19:50:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:49.923 19:50:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:49.923 19:50:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.923 19:50:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:49.923 19:50:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.923 19:50:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:09:49.923 "name": "raid_bdev1", 00:09:49.923 "uuid": "4499658f-9382-4744-9622-4d569d8787cf", 00:09:49.923 "strip_size_kb": 0, 00:09:49.923 "state": "online", 00:09:49.923 "raid_level": "raid1", 00:09:49.923 "superblock": true, 00:09:49.923 "num_base_bdevs": 2, 00:09:49.923 "num_base_bdevs_discovered": 1, 00:09:49.923 "num_base_bdevs_operational": 1, 00:09:49.923 "base_bdevs_list": [ 00:09:49.923 { 00:09:49.923 "name": null, 00:09:49.923 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:49.923 "is_configured": false, 00:09:49.923 "data_offset": 0, 00:09:49.923 "data_size": 63488 00:09:49.923 }, 00:09:49.923 { 00:09:49.923 "name": "BaseBdev2", 00:09:49.923 "uuid": "8264b9d2-a710-533b-9c1c-f815c68be3a2", 00:09:49.923 "is_configured": true, 00:09:49.923 "data_offset": 2048, 00:09:49.923 "data_size": 63488 00:09:49.923 } 00:09:49.923 ] 00:09:49.923 }' 00:09:49.923 19:50:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:09:49.923 19:50:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 
00:09:49.923 19:50:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:09:50.180 19:50:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:09:50.180 19:50:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 73675 00:09:50.181 19:50:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 73675 ']' 00:09:50.181 19:50:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 73675 00:09:50.181 19:50:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:09:50.181 19:50:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:50.181 19:50:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73675 00:09:50.181 killing process with pid 73675 00:09:50.181 Received shutdown signal, test time was about 60.000000 seconds 00:09:50.181 00:09:50.181 Latency(us) 00:09:50.181 [2024-11-26T19:50:41.118Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:50.181 [2024-11-26T19:50:41.118Z] =================================================================================================================== 00:09:50.181 [2024-11-26T19:50:41.118Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:09:50.181 19:50:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:50.181 19:50:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:50.181 19:50:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73675' 00:09:50.181 19:50:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 73675 00:09:50.181 [2024-11-26 19:50:40.881572] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:50.181 19:50:40 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 73675 00:09:50.181 [2024-11-26 19:50:40.881685] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:50.181 [2024-11-26 19:50:40.881734] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:50.181 [2024-11-26 19:50:40.881744] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:09:50.181 [2024-11-26 19:50:41.033170] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:50.746 19:50:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:09:50.746 00:09:50.746 real 0m20.429s 00:09:50.746 user 0m24.237s 00:09:50.746 sys 0m3.089s 00:09:50.746 ************************************ 00:09:50.746 END TEST raid_rebuild_test_sb 00:09:50.746 ************************************ 00:09:50.746 19:50:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:50.746 19:50:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:50.746 19:50:41 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true true 00:09:50.746 19:50:41 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:09:50.746 19:50:41 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:50.746 19:50:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:51.004 ************************************ 00:09:51.004 START TEST raid_rebuild_test_io 00:09:51.004 ************************************ 00:09:51.004 19:50:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 false true true 00:09:51.004 19:50:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:09:51.004 19:50:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=2 00:09:51.004 19:50:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:09:51.004 19:50:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:09:51.004 19:50:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:09:51.004 19:50:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:09:51.004 19:50:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:09:51.004 19:50:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:09:51.004 19:50:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:09:51.004 19:50:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:09:51.004 19:50:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:09:51.004 19:50:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:09:51.004 19:50:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:09:51.004 19:50:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:51.004 19:50:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:09:51.004 19:50:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:09:51.004 19:50:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:09:51.004 19:50:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:09:51.004 19:50:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:09:51.004 19:50:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:09:51.004 19:50:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:09:51.004 
19:50:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:09:51.004 19:50:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:09:51.004 19:50:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=74376 00:09:51.004 19:50:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 74376 00:09:51.004 19:50:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # '[' -z 74376 ']' 00:09:51.004 19:50:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:51.004 19:50:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:51.004 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:51.004 19:50:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:51.004 19:50:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:51.004 19:50:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:09:51.004 19:50:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:09:51.004 I/O size of 3145728 is greater than zero copy threshold (65536). 00:09:51.004 Zero copy mechanism will not be used. 00:09:51.004 [2024-11-26 19:50:41.757850] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 
00:09:51.004 [2024-11-26 19:50:41.758028] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74376 ] 00:09:51.004 [2024-11-26 19:50:41.919277] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:51.262 [2024-11-26 19:50:42.015353] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:51.262 [2024-11-26 19:50:42.133665] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:51.262 [2024-11-26 19:50:42.133691] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:51.828 19:50:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:51.828 19:50:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # return 0 00:09:51.828 19:50:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:09:51.828 19:50:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:51.828 19:50:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.828 19:50:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:09:51.828 BaseBdev1_malloc 00:09:51.828 19:50:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.828 19:50:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:09:51.828 19:50:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.828 19:50:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:09:51.828 [2024-11-26 19:50:42.691859] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev1_malloc 00:09:51.828 [2024-11-26 19:50:42.692030] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:51.828 [2024-11-26 19:50:42.692068] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:51.828 [2024-11-26 19:50:42.692129] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:51.828 [2024-11-26 19:50:42.694001] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:51.828 [2024-11-26 19:50:42.694034] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:51.828 BaseBdev1 00:09:51.828 19:50:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.828 19:50:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:09:51.829 19:50:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:51.829 19:50:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.829 19:50:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:09:51.829 BaseBdev2_malloc 00:09:51.829 19:50:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.829 19:50:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:09:51.829 19:50:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.829 19:50:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:09:51.829 [2024-11-26 19:50:42.729068] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:09:51.829 [2024-11-26 19:50:42.729186] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:51.829 [2024-11-26 19:50:42.729221] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:09:51.829 [2024-11-26 19:50:42.729267] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:51.829 [2024-11-26 19:50:42.731106] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:51.829 [2024-11-26 19:50:42.731133] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:51.829 BaseBdev2 00:09:51.829 19:50:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.829 19:50:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:09:51.829 19:50:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.829 19:50:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:09:52.088 spare_malloc 00:09:52.088 19:50:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.088 19:50:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:09:52.088 19:50:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.088 19:50:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:09:52.088 spare_delay 00:09:52.088 19:50:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.088 19:50:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:09:52.088 19:50:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.088 19:50:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:09:52.088 [2024-11-26 19:50:42.782456] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 
00:09:52.088 [2024-11-26 19:50:42.782500] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:52.088 [2024-11-26 19:50:42.782514] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:09:52.088 [2024-11-26 19:50:42.782523] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:52.088 [2024-11-26 19:50:42.784355] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:52.088 [2024-11-26 19:50:42.784381] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:09:52.088 spare 00:09:52.088 19:50:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.088 19:50:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:09:52.089 19:50:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.089 19:50:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:09:52.089 [2024-11-26 19:50:42.790503] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:52.089 [2024-11-26 19:50:42.792085] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:52.089 [2024-11-26 19:50:42.792154] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:09:52.089 [2024-11-26 19:50:42.792165] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:09:52.089 [2024-11-26 19:50:42.792475] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:52.089 [2024-11-26 19:50:42.792625] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:09:52.089 [2024-11-26 19:50:42.792650] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 
0x617000007780 00:09:52.089 [2024-11-26 19:50:42.792829] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:52.089 19:50:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.089 19:50:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:52.089 19:50:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:52.089 19:50:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:52.089 19:50:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:52.089 19:50:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:52.089 19:50:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:52.089 19:50:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:52.089 19:50:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:52.089 19:50:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:52.089 19:50:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:52.089 19:50:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:52.089 19:50:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:52.089 19:50:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.089 19:50:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:09:52.089 19:50:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.089 19:50:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:52.089 
"name": "raid_bdev1", 00:09:52.089 "uuid": "baff3d2d-f244-410b-be92-4aa9a68d3f39", 00:09:52.089 "strip_size_kb": 0, 00:09:52.089 "state": "online", 00:09:52.089 "raid_level": "raid1", 00:09:52.089 "superblock": false, 00:09:52.089 "num_base_bdevs": 2, 00:09:52.089 "num_base_bdevs_discovered": 2, 00:09:52.089 "num_base_bdevs_operational": 2, 00:09:52.089 "base_bdevs_list": [ 00:09:52.089 { 00:09:52.089 "name": "BaseBdev1", 00:09:52.089 "uuid": "3ae805be-bd2d-5f95-8abf-d343da0b3a58", 00:09:52.089 "is_configured": true, 00:09:52.089 "data_offset": 0, 00:09:52.089 "data_size": 65536 00:09:52.089 }, 00:09:52.089 { 00:09:52.089 "name": "BaseBdev2", 00:09:52.089 "uuid": "0e44ecc1-7266-5e73-a18c-26207ea8743e", 00:09:52.089 "is_configured": true, 00:09:52.089 "data_offset": 0, 00:09:52.089 "data_size": 65536 00:09:52.089 } 00:09:52.089 ] 00:09:52.089 }' 00:09:52.089 19:50:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:52.089 19:50:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:09:52.347 19:50:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:52.347 19:50:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.347 19:50:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:09:52.347 19:50:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:09:52.347 [2024-11-26 19:50:43.134845] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:52.347 19:50:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.347 19:50:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:09:52.347 19:50:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:09:52.347 19:50:43 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:52.347 19:50:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.347 19:50:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:09:52.347 19:50:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.347 19:50:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:09:52.347 19:50:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:09:52.347 19:50:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:09:52.347 19:50:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:52.347 19:50:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.347 19:50:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:09:52.347 [2024-11-26 19:50:43.206574] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:52.347 19:50:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.347 19:50:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:09:52.347 19:50:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:52.347 19:50:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:52.347 19:50:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:52.347 19:50:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:52.347 19:50:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:09:52.347 19:50:43 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:52.347 19:50:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:52.348 19:50:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:52.348 19:50:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:52.348 19:50:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:52.348 19:50:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:52.348 19:50:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.348 19:50:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:09:52.348 19:50:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.348 19:50:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:52.348 "name": "raid_bdev1", 00:09:52.348 "uuid": "baff3d2d-f244-410b-be92-4aa9a68d3f39", 00:09:52.348 "strip_size_kb": 0, 00:09:52.348 "state": "online", 00:09:52.348 "raid_level": "raid1", 00:09:52.348 "superblock": false, 00:09:52.348 "num_base_bdevs": 2, 00:09:52.348 "num_base_bdevs_discovered": 1, 00:09:52.348 "num_base_bdevs_operational": 1, 00:09:52.348 "base_bdevs_list": [ 00:09:52.348 { 00:09:52.348 "name": null, 00:09:52.348 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:52.348 "is_configured": false, 00:09:52.348 "data_offset": 0, 00:09:52.348 "data_size": 65536 00:09:52.348 }, 00:09:52.348 { 00:09:52.348 "name": "BaseBdev2", 00:09:52.348 "uuid": "0e44ecc1-7266-5e73-a18c-26207ea8743e", 00:09:52.348 "is_configured": true, 00:09:52.348 "data_offset": 0, 00:09:52.348 "data_size": 65536 00:09:52.348 } 00:09:52.348 ] 00:09:52.348 }' 00:09:52.348 19:50:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:09:52.348 19:50:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:09:52.606 I/O size of 3145728 is greater than zero copy threshold (65536). 00:09:52.606 Zero copy mechanism will not be used. 00:09:52.606 Running I/O for 60 seconds... 00:09:52.606 [2024-11-26 19:50:43.291581] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:09:52.606 19:50:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:09:52.606 19:50:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.606 19:50:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:09:52.606 [2024-11-26 19:50:43.530337] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:09:52.864 19:50:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.864 19:50:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:09:52.864 [2024-11-26 19:50:43.585941] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:09:52.864 [2024-11-26 19:50:43.587703] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:09:52.864 [2024-11-26 19:50:43.689088] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:09:52.864 [2024-11-26 19:50:43.689454] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:09:53.122 [2024-11-26 19:50:43.907421] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:09:53.122 [2024-11-26 19:50:43.907829] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:09:53.381 [2024-11-26 19:50:44.150962] 
bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:09:53.381 [2024-11-26 19:50:44.258469] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:09:53.639 160.00 IOPS, 480.00 MiB/s [2024-11-26T19:50:44.576Z] 19:50:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:09:53.639 19:50:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:09:53.639 19:50:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:09:53.639 19:50:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:09:53.639 19:50:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:09:53.639 19:50:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:53.639 19:50:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:53.639 19:50:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.639 19:50:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:09:53.897 19:50:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.897 [2024-11-26 19:50:44.595358] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:09:53.897 19:50:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:09:53.897 "name": "raid_bdev1", 00:09:53.897 "uuid": "baff3d2d-f244-410b-be92-4aa9a68d3f39", 00:09:53.897 "strip_size_kb": 0, 00:09:53.897 "state": "online", 00:09:53.897 "raid_level": "raid1", 00:09:53.897 "superblock": false, 00:09:53.897 "num_base_bdevs": 2, 00:09:53.897 
"num_base_bdevs_discovered": 2, 00:09:53.897 "num_base_bdevs_operational": 2, 00:09:53.897 "process": { 00:09:53.897 "type": "rebuild", 00:09:53.897 "target": "spare", 00:09:53.897 "progress": { 00:09:53.897 "blocks": 12288, 00:09:53.897 "percent": 18 00:09:53.897 } 00:09:53.897 }, 00:09:53.897 "base_bdevs_list": [ 00:09:53.897 { 00:09:53.897 "name": "spare", 00:09:53.897 "uuid": "cccb27a0-b230-5efb-929b-dec4da2f446b", 00:09:53.897 "is_configured": true, 00:09:53.897 "data_offset": 0, 00:09:53.897 "data_size": 65536 00:09:53.897 }, 00:09:53.897 { 00:09:53.897 "name": "BaseBdev2", 00:09:53.897 "uuid": "0e44ecc1-7266-5e73-a18c-26207ea8743e", 00:09:53.897 "is_configured": true, 00:09:53.897 "data_offset": 0, 00:09:53.897 "data_size": 65536 00:09:53.897 } 00:09:53.897 ] 00:09:53.897 }' 00:09:53.897 19:50:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:09:53.897 19:50:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:09:53.897 19:50:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:09:53.897 19:50:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:09:53.897 19:50:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:09:53.897 19:50:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.897 19:50:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:09:53.897 [2024-11-26 19:50:44.665640] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:09:53.897 [2024-11-26 19:50:44.710862] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:09:53.897 [2024-11-26 19:50:44.721483] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such 
device 00:09:53.897 [2024-11-26 19:50:44.733058] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:53.897 [2024-11-26 19:50:44.733161] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:09:53.897 [2024-11-26 19:50:44.733178] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:09:53.897 [2024-11-26 19:50:44.755193] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:09:53.897 19:50:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.897 19:50:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:09:53.897 19:50:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:53.897 19:50:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:53.897 19:50:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:53.897 19:50:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:53.897 19:50:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:09:53.897 19:50:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:53.897 19:50:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:53.897 19:50:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:53.897 19:50:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:53.897 19:50:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:53.897 19:50:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.897 19:50:44 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@10 -- # set +x 00:09:53.897 19:50:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:53.897 19:50:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.897 19:50:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:53.897 "name": "raid_bdev1", 00:09:53.897 "uuid": "baff3d2d-f244-410b-be92-4aa9a68d3f39", 00:09:53.897 "strip_size_kb": 0, 00:09:53.897 "state": "online", 00:09:53.897 "raid_level": "raid1", 00:09:53.897 "superblock": false, 00:09:53.897 "num_base_bdevs": 2, 00:09:53.897 "num_base_bdevs_discovered": 1, 00:09:53.897 "num_base_bdevs_operational": 1, 00:09:53.897 "base_bdevs_list": [ 00:09:53.897 { 00:09:53.897 "name": null, 00:09:53.897 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:53.897 "is_configured": false, 00:09:53.897 "data_offset": 0, 00:09:53.897 "data_size": 65536 00:09:53.897 }, 00:09:53.897 { 00:09:53.897 "name": "BaseBdev2", 00:09:53.897 "uuid": "0e44ecc1-7266-5e73-a18c-26207ea8743e", 00:09:53.897 "is_configured": true, 00:09:53.897 "data_offset": 0, 00:09:53.897 "data_size": 65536 00:09:53.897 } 00:09:53.897 ] 00:09:53.897 }' 00:09:53.897 19:50:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:53.897 19:50:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:09:54.156 19:50:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:09:54.156 19:50:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:09:54.156 19:50:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:09:54.156 19:50:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:09:54.156 19:50:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 
00:09:54.156 19:50:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:54.156 19:50:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:54.156 19:50:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.156 19:50:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:09:54.414 19:50:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.414 19:50:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:09:54.414 "name": "raid_bdev1", 00:09:54.414 "uuid": "baff3d2d-f244-410b-be92-4aa9a68d3f39", 00:09:54.414 "strip_size_kb": 0, 00:09:54.414 "state": "online", 00:09:54.414 "raid_level": "raid1", 00:09:54.414 "superblock": false, 00:09:54.414 "num_base_bdevs": 2, 00:09:54.414 "num_base_bdevs_discovered": 1, 00:09:54.414 "num_base_bdevs_operational": 1, 00:09:54.414 "base_bdevs_list": [ 00:09:54.414 { 00:09:54.414 "name": null, 00:09:54.414 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:54.414 "is_configured": false, 00:09:54.414 "data_offset": 0, 00:09:54.414 "data_size": 65536 00:09:54.414 }, 00:09:54.414 { 00:09:54.414 "name": "BaseBdev2", 00:09:54.414 "uuid": "0e44ecc1-7266-5e73-a18c-26207ea8743e", 00:09:54.414 "is_configured": true, 00:09:54.414 "data_offset": 0, 00:09:54.414 "data_size": 65536 00:09:54.414 } 00:09:54.414 ] 00:09:54.414 }' 00:09:54.414 19:50:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:09:54.414 19:50:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:09:54.414 19:50:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:09:54.414 19:50:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:09:54.414 19:50:45 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:09:54.414 19:50:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.414 19:50:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:09:54.414 [2024-11-26 19:50:45.188969] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:09:54.414 19:50:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.414 19:50:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:09:54.414 [2024-11-26 19:50:45.232756] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:54.414 [2024-11-26 19:50:45.234476] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:09:54.414 186.00 IOPS, 558.00 MiB/s [2024-11-26T19:50:45.351Z] [2024-11-26 19:50:45.340885] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:09:54.414 [2024-11-26 19:50:45.341385] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:09:54.672 [2024-11-26 19:50:45.559473] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:09:54.672 [2024-11-26 19:50:45.559732] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:09:54.930 [2024-11-26 19:50:45.795719] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:09:54.930 [2024-11-26 19:50:45.796110] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:09:55.187 [2024-11-26 19:50:46.008779] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: 
split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:09:55.445 19:50:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:09:55.445 19:50:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:09:55.445 19:50:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:09:55.445 19:50:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:09:55.445 19:50:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:09:55.445 19:50:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:55.445 19:50:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:55.445 19:50:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.445 19:50:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:09:55.445 [2024-11-26 19:50:46.228869] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:09:55.445 19:50:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.445 19:50:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:09:55.445 "name": "raid_bdev1", 00:09:55.445 "uuid": "baff3d2d-f244-410b-be92-4aa9a68d3f39", 00:09:55.445 "strip_size_kb": 0, 00:09:55.445 "state": "online", 00:09:55.445 "raid_level": "raid1", 00:09:55.445 "superblock": false, 00:09:55.445 "num_base_bdevs": 2, 00:09:55.445 "num_base_bdevs_discovered": 2, 00:09:55.445 "num_base_bdevs_operational": 2, 00:09:55.445 "process": { 00:09:55.445 "type": "rebuild", 00:09:55.445 "target": "spare", 00:09:55.445 "progress": { 00:09:55.445 "blocks": 14336, 00:09:55.445 "percent": 21 00:09:55.445 } 
00:09:55.445 }, 00:09:55.445 "base_bdevs_list": [ 00:09:55.445 { 00:09:55.445 "name": "spare", 00:09:55.445 "uuid": "cccb27a0-b230-5efb-929b-dec4da2f446b", 00:09:55.445 "is_configured": true, 00:09:55.445 "data_offset": 0, 00:09:55.445 "data_size": 65536 00:09:55.445 }, 00:09:55.445 { 00:09:55.445 "name": "BaseBdev2", 00:09:55.445 "uuid": "0e44ecc1-7266-5e73-a18c-26207ea8743e", 00:09:55.445 "is_configured": true, 00:09:55.445 "data_offset": 0, 00:09:55.445 "data_size": 65536 00:09:55.445 } 00:09:55.445 ] 00:09:55.445 }' 00:09:55.445 19:50:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:09:55.445 19:50:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:09:55.445 19:50:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:09:55.445 157.33 IOPS, 472.00 MiB/s [2024-11-26T19:50:46.382Z] 19:50:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:09:55.445 19:50:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:09:55.445 19:50:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:09:55.445 19:50:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:09:55.445 19:50:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:09:55.445 19:50:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=316 00:09:55.445 19:50:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:09:55.445 19:50:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:09:55.445 19:50:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:09:55.445 19:50:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # 
local process_type=rebuild 00:09:55.445 19:50:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:09:55.445 19:50:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:09:55.445 19:50:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:55.445 19:50:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.445 19:50:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:09:55.446 19:50:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:55.446 19:50:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.446 19:50:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:09:55.446 "name": "raid_bdev1", 00:09:55.446 "uuid": "baff3d2d-f244-410b-be92-4aa9a68d3f39", 00:09:55.446 "strip_size_kb": 0, 00:09:55.446 "state": "online", 00:09:55.446 "raid_level": "raid1", 00:09:55.446 "superblock": false, 00:09:55.446 "num_base_bdevs": 2, 00:09:55.446 "num_base_bdevs_discovered": 2, 00:09:55.446 "num_base_bdevs_operational": 2, 00:09:55.446 "process": { 00:09:55.446 "type": "rebuild", 00:09:55.446 "target": "spare", 00:09:55.446 "progress": { 00:09:55.446 "blocks": 14336, 00:09:55.446 "percent": 21 00:09:55.446 } 00:09:55.446 }, 00:09:55.446 "base_bdevs_list": [ 00:09:55.446 { 00:09:55.446 "name": "spare", 00:09:55.446 "uuid": "cccb27a0-b230-5efb-929b-dec4da2f446b", 00:09:55.446 "is_configured": true, 00:09:55.446 "data_offset": 0, 00:09:55.446 "data_size": 65536 00:09:55.446 }, 00:09:55.446 { 00:09:55.446 "name": "BaseBdev2", 00:09:55.446 "uuid": "0e44ecc1-7266-5e73-a18c-26207ea8743e", 00:09:55.446 "is_configured": true, 00:09:55.446 "data_offset": 0, 00:09:55.446 "data_size": 65536 00:09:55.446 } 00:09:55.446 ] 00:09:55.446 }' 00:09:55.446 19:50:46 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:09:55.703 19:50:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:09:55.703 19:50:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:09:55.703 19:50:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:09:55.703 19:50:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:09:55.703 [2024-11-26 19:50:46.442166] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:09:55.703 [2024-11-26 19:50:46.442469] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:09:55.961 [2024-11-26 19:50:46.791920] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:09:56.218 [2024-11-26 19:50:46.995751] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:09:56.732 130.50 IOPS, 391.50 MiB/s [2024-11-26T19:50:47.669Z] 19:50:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:09:56.732 19:50:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:09:56.732 19:50:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:09:56.732 19:50:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:09:56.732 19:50:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:09:56.732 19:50:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:09:56.732 19:50:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:09:56.732 19:50:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:56.732 19:50:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.732 19:50:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:09:56.732 19:50:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.733 19:50:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:09:56.733 "name": "raid_bdev1", 00:09:56.733 "uuid": "baff3d2d-f244-410b-be92-4aa9a68d3f39", 00:09:56.733 "strip_size_kb": 0, 00:09:56.733 "state": "online", 00:09:56.733 "raid_level": "raid1", 00:09:56.733 "superblock": false, 00:09:56.733 "num_base_bdevs": 2, 00:09:56.733 "num_base_bdevs_discovered": 2, 00:09:56.733 "num_base_bdevs_operational": 2, 00:09:56.733 "process": { 00:09:56.733 "type": "rebuild", 00:09:56.733 "target": "spare", 00:09:56.733 "progress": { 00:09:56.733 "blocks": 30720, 00:09:56.733 "percent": 46 00:09:56.733 } 00:09:56.733 }, 00:09:56.733 "base_bdevs_list": [ 00:09:56.733 { 00:09:56.733 "name": "spare", 00:09:56.733 "uuid": "cccb27a0-b230-5efb-929b-dec4da2f446b", 00:09:56.733 "is_configured": true, 00:09:56.733 "data_offset": 0, 00:09:56.733 "data_size": 65536 00:09:56.733 }, 00:09:56.733 { 00:09:56.733 "name": "BaseBdev2", 00:09:56.733 "uuid": "0e44ecc1-7266-5e73-a18c-26207ea8743e", 00:09:56.733 "is_configured": true, 00:09:56.733 "data_offset": 0, 00:09:56.733 "data_size": 65536 00:09:56.733 } 00:09:56.733 ] 00:09:56.733 }' 00:09:56.733 19:50:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:09:56.733 19:50:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:09:56.733 19:50:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:09:56.733 19:50:47 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:09:56.733 19:50:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:09:56.733 [2024-11-26 19:50:47.533924] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:09:56.991 [2024-11-26 19:50:47.741424] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:09:57.250 [2024-11-26 19:50:48.073890] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:09:57.767 115.40 IOPS, 346.20 MiB/s [2024-11-26T19:50:48.704Z] 19:50:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:09:57.767 19:50:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:09:57.767 19:50:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:09:57.767 19:50:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:09:57.767 19:50:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:09:57.767 19:50:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:09:57.767 19:50:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:57.767 19:50:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.767 19:50:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:09:57.767 19:50:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:57.767 19:50:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.767 19:50:48 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:09:57.767 "name": "raid_bdev1", 00:09:57.767 "uuid": "baff3d2d-f244-410b-be92-4aa9a68d3f39", 00:09:57.767 "strip_size_kb": 0, 00:09:57.767 "state": "online", 00:09:57.767 "raid_level": "raid1", 00:09:57.767 "superblock": false, 00:09:57.767 "num_base_bdevs": 2, 00:09:57.767 "num_base_bdevs_discovered": 2, 00:09:57.767 "num_base_bdevs_operational": 2, 00:09:57.767 "process": { 00:09:57.767 "type": "rebuild", 00:09:57.767 "target": "spare", 00:09:57.767 "progress": { 00:09:57.767 "blocks": 45056, 00:09:57.767 "percent": 68 00:09:57.767 } 00:09:57.767 }, 00:09:57.767 "base_bdevs_list": [ 00:09:57.767 { 00:09:57.767 "name": "spare", 00:09:57.767 "uuid": "cccb27a0-b230-5efb-929b-dec4da2f446b", 00:09:57.767 "is_configured": true, 00:09:57.767 "data_offset": 0, 00:09:57.767 "data_size": 65536 00:09:57.767 }, 00:09:57.767 { 00:09:57.767 "name": "BaseBdev2", 00:09:57.767 "uuid": "0e44ecc1-7266-5e73-a18c-26207ea8743e", 00:09:57.767 "is_configured": true, 00:09:57.767 "data_offset": 0, 00:09:57.767 "data_size": 65536 00:09:57.767 } 00:09:57.767 ] 00:09:57.767 }' 00:09:57.767 19:50:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:09:57.767 19:50:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:09:57.767 19:50:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:09:57.767 19:50:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:09:57.767 19:50:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:09:58.702 [2024-11-26 19:50:49.269968] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:09:58.702 104.83 IOPS, 314.50 MiB/s [2024-11-26T19:50:49.639Z] 19:50:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:09:58.702 
19:50:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:09:58.702 19:50:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:09:58.702 19:50:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:09:58.702 19:50:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:09:58.702 19:50:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:09:58.959 19:50:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:58.959 19:50:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:58.959 19:50:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.959 19:50:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:09:58.959 19:50:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.959 19:50:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:09:58.959 "name": "raid_bdev1", 00:09:58.959 "uuid": "baff3d2d-f244-410b-be92-4aa9a68d3f39", 00:09:58.959 "strip_size_kb": 0, 00:09:58.959 "state": "online", 00:09:58.959 "raid_level": "raid1", 00:09:58.959 "superblock": false, 00:09:58.959 "num_base_bdevs": 2, 00:09:58.959 "num_base_bdevs_discovered": 2, 00:09:58.959 "num_base_bdevs_operational": 2, 00:09:58.959 "process": { 00:09:58.959 "type": "rebuild", 00:09:58.959 "target": "spare", 00:09:58.959 "progress": { 00:09:58.959 "blocks": 63488, 00:09:58.959 "percent": 96 00:09:58.959 } 00:09:58.959 }, 00:09:58.959 "base_bdevs_list": [ 00:09:58.959 { 00:09:58.959 "name": "spare", 00:09:58.959 "uuid": "cccb27a0-b230-5efb-929b-dec4da2f446b", 00:09:58.959 "is_configured": true, 00:09:58.959 "data_offset": 0, 00:09:58.959 "data_size": 
65536 00:09:58.959 }, 00:09:58.959 { 00:09:58.959 "name": "BaseBdev2", 00:09:58.959 "uuid": "0e44ecc1-7266-5e73-a18c-26207ea8743e", 00:09:58.959 "is_configured": true, 00:09:58.959 "data_offset": 0, 00:09:58.959 "data_size": 65536 00:09:58.959 } 00:09:58.959 ] 00:09:58.959 }' 00:09:58.959 19:50:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:09:58.959 19:50:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:09:58.959 [2024-11-26 19:50:49.699844] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:09:58.959 19:50:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:09:58.959 19:50:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:09:58.959 19:50:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:09:58.959 [2024-11-26 19:50:49.804659] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:09:58.959 [2024-11-26 19:50:49.806514] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:00.092 96.57 IOPS, 289.71 MiB/s [2024-11-26T19:50:51.029Z] 19:50:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:10:00.092 19:50:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:10:00.092 19:50:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:00.092 19:50:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:10:00.092 19:50:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:10:00.092 19:50:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:00.092 19:50:50 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:00.092 19:50:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.092 19:50:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:10:00.092 19:50:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:00.092 19:50:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.092 19:50:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:00.092 "name": "raid_bdev1", 00:10:00.092 "uuid": "baff3d2d-f244-410b-be92-4aa9a68d3f39", 00:10:00.092 "strip_size_kb": 0, 00:10:00.092 "state": "online", 00:10:00.092 "raid_level": "raid1", 00:10:00.092 "superblock": false, 00:10:00.092 "num_base_bdevs": 2, 00:10:00.092 "num_base_bdevs_discovered": 2, 00:10:00.092 "num_base_bdevs_operational": 2, 00:10:00.092 "base_bdevs_list": [ 00:10:00.092 { 00:10:00.092 "name": "spare", 00:10:00.092 "uuid": "cccb27a0-b230-5efb-929b-dec4da2f446b", 00:10:00.092 "is_configured": true, 00:10:00.092 "data_offset": 0, 00:10:00.092 "data_size": 65536 00:10:00.092 }, 00:10:00.092 { 00:10:00.092 "name": "BaseBdev2", 00:10:00.092 "uuid": "0e44ecc1-7266-5e73-a18c-26207ea8743e", 00:10:00.092 "is_configured": true, 00:10:00.092 "data_offset": 0, 00:10:00.092 "data_size": 65536 00:10:00.092 } 00:10:00.092 ] 00:10:00.092 }' 00:10:00.092 19:50:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:00.092 19:50:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:10:00.092 19:50:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:00.092 19:50:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:10:00.092 19:50:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # 
break 00:10:00.092 19:50:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:10:00.092 19:50:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:00.092 19:50:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:10:00.092 19:50:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:10:00.092 19:50:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:00.092 19:50:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:00.092 19:50:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:00.092 19:50:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.092 19:50:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:10:00.092 19:50:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.092 19:50:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:00.092 "name": "raid_bdev1", 00:10:00.092 "uuid": "baff3d2d-f244-410b-be92-4aa9a68d3f39", 00:10:00.092 "strip_size_kb": 0, 00:10:00.092 "state": "online", 00:10:00.092 "raid_level": "raid1", 00:10:00.092 "superblock": false, 00:10:00.092 "num_base_bdevs": 2, 00:10:00.092 "num_base_bdevs_discovered": 2, 00:10:00.092 "num_base_bdevs_operational": 2, 00:10:00.092 "base_bdevs_list": [ 00:10:00.092 { 00:10:00.092 "name": "spare", 00:10:00.092 "uuid": "cccb27a0-b230-5efb-929b-dec4da2f446b", 00:10:00.092 "is_configured": true, 00:10:00.092 "data_offset": 0, 00:10:00.092 "data_size": 65536 00:10:00.092 }, 00:10:00.092 { 00:10:00.092 "name": "BaseBdev2", 00:10:00.092 "uuid": "0e44ecc1-7266-5e73-a18c-26207ea8743e", 00:10:00.092 "is_configured": true, 00:10:00.092 "data_offset": 0, 
00:10:00.092 "data_size": 65536 00:10:00.092 } 00:10:00.092 ] 00:10:00.092 }' 00:10:00.092 19:50:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:00.092 19:50:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:10:00.092 19:50:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:00.092 19:50:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:10:00.092 19:50:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:00.092 19:50:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:00.092 19:50:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:00.092 19:50:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:00.092 19:50:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:00.092 19:50:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:00.092 19:50:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:00.092 19:50:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:00.092 19:50:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:00.092 19:50:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:00.092 19:50:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:00.092 19:50:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:00.092 19:50:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.092 19:50:50 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:10:00.092 19:50:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.092 19:50:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:00.092 "name": "raid_bdev1", 00:10:00.092 "uuid": "baff3d2d-f244-410b-be92-4aa9a68d3f39", 00:10:00.092 "strip_size_kb": 0, 00:10:00.092 "state": "online", 00:10:00.092 "raid_level": "raid1", 00:10:00.092 "superblock": false, 00:10:00.092 "num_base_bdevs": 2, 00:10:00.092 "num_base_bdevs_discovered": 2, 00:10:00.092 "num_base_bdevs_operational": 2, 00:10:00.092 "base_bdevs_list": [ 00:10:00.092 { 00:10:00.092 "name": "spare", 00:10:00.092 "uuid": "cccb27a0-b230-5efb-929b-dec4da2f446b", 00:10:00.092 "is_configured": true, 00:10:00.092 "data_offset": 0, 00:10:00.092 "data_size": 65536 00:10:00.092 }, 00:10:00.092 { 00:10:00.092 "name": "BaseBdev2", 00:10:00.092 "uuid": "0e44ecc1-7266-5e73-a18c-26207ea8743e", 00:10:00.092 "is_configured": true, 00:10:00.092 "data_offset": 0, 00:10:00.092 "data_size": 65536 00:10:00.092 } 00:10:00.092 ] 00:10:00.092 }' 00:10:00.092 19:50:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:00.092 19:50:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:10:00.351 19:50:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:00.351 19:50:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.351 19:50:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:10:00.351 [2024-11-26 19:50:51.275294] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:00.351 [2024-11-26 19:50:51.275319] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:00.611 90.25 IOPS, 270.75 MiB/s 00:10:00.611 Latency(us) 00:10:00.611 
[2024-11-26T19:50:51.548Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:00.611 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:10:00.611 raid_bdev1 : 8.01 90.39 271.17 0.00 0.00 15430.45 248.91 112116.97 00:10:00.611 [2024-11-26T19:50:51.548Z] =================================================================================================================== 00:10:00.611 [2024-11-26T19:50:51.548Z] Total : 90.39 271.17 0.00 0.00 15430.45 248.91 112116.97 00:10:00.611 [2024-11-26 19:50:51.315706] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:00.611 [2024-11-26 19:50:51.315756] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:00.611 [2024-11-26 19:50:51.315836] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:00.611 [2024-11-26 19:50:51.315845] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:10:00.611 { 00:10:00.611 "results": [ 00:10:00.611 { 00:10:00.611 "job": "raid_bdev1", 00:10:00.611 "core_mask": "0x1", 00:10:00.611 "workload": "randrw", 00:10:00.611 "percentage": 50, 00:10:00.611 "status": "finished", 00:10:00.611 "queue_depth": 2, 00:10:00.611 "io_size": 3145728, 00:10:00.611 "runtime": 8.00987, 00:10:00.611 "iops": 90.3884832088411, 00:10:00.611 "mibps": 271.1654496265233, 00:10:00.611 "io_failed": 0, 00:10:00.611 "io_timeout": 0, 00:10:00.611 "avg_latency_us": 15430.44732681683, 00:10:00.611 "min_latency_us": 248.91076923076923, 00:10:00.611 "max_latency_us": 112116.97230769231 00:10:00.611 } 00:10:00.611 ], 00:10:00.611 "core_count": 1 00:10:00.611 } 00:10:00.611 19:50:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.611 19:50:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:00.611 19:50:51 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:10:00.611 19:50:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.611 19:50:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:10:00.611 19:50:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.611 19:50:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:10:00.611 19:50:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:10:00.611 19:50:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:10:00.611 19:50:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:10:00.611 19:50:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:10:00.611 19:50:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:10:00.611 19:50:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:10:00.611 19:50:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:10:00.611 19:50:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:10:00.611 19:50:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:10:00.611 19:50:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:10:00.611 19:50:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:10:00.611 19:50:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:10:00.870 /dev/nbd0 00:10:00.870 19:50:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:10:00.870 19:50:51 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:10:00.870 19:50:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:10:00.870 19:50:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:10:00.870 19:50:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:00.870 19:50:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:00.870 19:50:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:10:00.870 19:50:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:10:00.870 19:50:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:00.870 19:50:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:00.870 19:50:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:00.870 1+0 records in 00:10:00.870 1+0 records out 00:10:00.870 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000364481 s, 11.2 MB/s 00:10:00.870 19:50:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:00.870 19:50:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:10:00.870 19:50:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:00.870 19:50:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:00.870 19:50:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:10:00.870 19:50:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:00.870 19:50:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 
)) 00:10:00.870 19:50:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:10:00.870 19:50:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:10:00.870 19:50:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:10:00.870 19:50:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:10:00.870 19:50:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:10:00.870 19:50:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:10:00.870 19:50:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:10:00.870 19:50:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:10:00.870 19:50:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:10:00.870 19:50:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:10:00.870 19:50:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:10:00.870 19:50:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:10:00.870 /dev/nbd1 00:10:01.129 19:50:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:10:01.129 19:50:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:10:01.129 19:50:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:10:01.129 19:50:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:10:01.129 19:50:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:01.129 19:50:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 
00:10:01.129 19:50:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:10:01.129 19:50:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:10:01.129 19:50:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:01.129 19:50:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:01.129 19:50:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:01.129 1+0 records in 00:10:01.129 1+0 records out 00:10:01.129 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000392004 s, 10.4 MB/s 00:10:01.129 19:50:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:01.129 19:50:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:10:01.129 19:50:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:01.129 19:50:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:01.129 19:50:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:10:01.129 19:50:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:01.129 19:50:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:10:01.129 19:50:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:10:01.129 19:50:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:10:01.129 19:50:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:10:01.129 19:50:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd1') 00:10:01.129 19:50:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:01.129 19:50:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:10:01.129 19:50:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:01.129 19:50:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:10:01.387 19:50:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:10:01.387 19:50:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:10:01.388 19:50:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:10:01.388 19:50:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:01.388 19:50:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:01.388 19:50:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:10:01.388 19:50:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:10:01.388 19:50:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:10:01.388 19:50:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:10:01.388 19:50:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:10:01.388 19:50:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:10:01.388 19:50:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:01.388 19:50:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:10:01.388 19:50:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:01.388 19:50:52 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:10:01.646 19:50:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:01.646 19:50:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:01.646 19:50:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:01.646 19:50:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:01.646 19:50:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:01.646 19:50:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:01.646 19:50:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:10:01.646 19:50:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:10:01.646 19:50:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:10:01.646 19:50:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 74376 00:10:01.646 19:50:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # '[' -z 74376 ']' 00:10:01.646 19:50:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # kill -0 74376 00:10:01.646 19:50:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # uname 00:10:01.646 19:50:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:01.646 19:50:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74376 00:10:01.646 killing process with pid 74376 00:10:01.646 Received shutdown signal, test time was about 9.118657 seconds 00:10:01.646 00:10:01.646 Latency(us) 00:10:01.646 [2024-11-26T19:50:52.583Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:01.646 [2024-11-26T19:50:52.583Z] 
=================================================================================================================== 00:10:01.646 [2024-11-26T19:50:52.583Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:01.646 19:50:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:01.646 19:50:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:01.646 19:50:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74376' 00:10:01.646 19:50:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@973 -- # kill 74376 00:10:01.646 [2024-11-26 19:50:52.412026] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:01.647 19:50:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@978 -- # wait 74376 00:10:01.647 [2024-11-26 19:50:52.528875] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:02.581 19:50:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:10:02.581 00:10:02.581 real 0m11.488s 00:10:02.581 user 0m14.161s 00:10:02.581 sys 0m1.098s 00:10:02.581 19:50:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:02.581 ************************************ 00:10:02.581 END TEST raid_rebuild_test_io 00:10:02.581 ************************************ 00:10:02.581 19:50:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:10:02.581 19:50:53 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true true 00:10:02.581 19:50:53 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:10:02.581 19:50:53 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:02.581 19:50:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:02.581 ************************************ 00:10:02.581 START TEST raid_rebuild_test_sb_io 
00:10:02.581 ************************************ 00:10:02.581 19:50:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true true true 00:10:02.581 19:50:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:10:02.581 19:50:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:10:02.581 19:50:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:10:02.581 19:50:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:10:02.581 19:50:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:10:02.581 19:50:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:10:02.581 19:50:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:10:02.581 19:50:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:10:02.581 19:50:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:10:02.581 19:50:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:10:02.581 19:50:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:10:02.581 19:50:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:10:02.581 19:50:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:10:02.581 19:50:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:10:02.581 19:50:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:10:02.581 19:50:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:10:02.581 19:50:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local 
strip_size 00:10:02.581 19:50:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:10:02.581 19:50:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:10:02.581 19:50:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:10:02.581 19:50:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:10:02.581 19:50:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:10:02.581 19:50:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:10:02.581 19:50:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:10:02.581 19:50:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=74754 00:10:02.581 19:50:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 74754 00:10:02.581 19:50:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # '[' -z 74754 ']' 00:10:02.581 19:50:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:02.581 19:50:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:02.581 19:50:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:10:02.581 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:02.581 19:50:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:10:02.581 19:50:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:02.581 19:50:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:10:02.581 [2024-11-26 19:50:53.288284] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 00:10:02.581 I/O size of 3145728 is greater than zero copy threshold (65536). 00:10:02.581 Zero copy mechanism will not be used. 00:10:02.581 [2024-11-26 19:50:53.288789] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74754 ] 00:10:02.581 [2024-11-26 19:50:53.453380] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:02.840 [2024-11-26 19:50:53.552301] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:02.840 [2024-11-26 19:50:53.671895] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:02.840 [2024-11-26 19:50:53.671943] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:03.410 19:50:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:03.410 19:50:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # return 0 00:10:03.410 19:50:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:10:03.410 19:50:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:03.410 19:50:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.410 19:50:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:10:03.410 BaseBdev1_malloc 00:10:03.410 19:50:54 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.410 19:50:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:10:03.410 19:50:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.410 19:50:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:10:03.410 [2024-11-26 19:50:54.169309] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:10:03.410 [2024-11-26 19:50:54.169491] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:03.410 [2024-11-26 19:50:54.169560] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:03.410 [2024-11-26 19:50:54.169610] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:03.410 [2024-11-26 19:50:54.171565] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:03.410 [2024-11-26 19:50:54.171671] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:03.410 BaseBdev1 00:10:03.410 19:50:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.410 19:50:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:10:03.410 19:50:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:03.410 19:50:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.410 19:50:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:10:03.410 BaseBdev2_malloc 00:10:03.410 19:50:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.410 19:50:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b 
BaseBdev2_malloc -p BaseBdev2 00:10:03.410 19:50:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.410 19:50:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:10:03.410 [2024-11-26 19:50:54.202652] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:10:03.410 [2024-11-26 19:50:54.202775] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:03.410 [2024-11-26 19:50:54.202807] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:10:03.410 [2024-11-26 19:50:54.202856] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:03.410 [2024-11-26 19:50:54.204696] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:03.410 [2024-11-26 19:50:54.204788] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:03.410 BaseBdev2 00:10:03.410 19:50:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.410 19:50:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:10:03.410 19:50:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.410 19:50:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:10:03.410 spare_malloc 00:10:03.410 19:50:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.410 19:50:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:10:03.410 19:50:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.410 19:50:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:10:03.410 spare_delay 
00:10:03.410 19:50:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.410 19:50:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:10:03.410 19:50:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.410 19:50:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:10:03.410 [2024-11-26 19:50:54.256129] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:10:03.410 [2024-11-26 19:50:54.256288] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:03.410 [2024-11-26 19:50:54.256310] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:10:03.410 [2024-11-26 19:50:54.256321] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:03.410 [2024-11-26 19:50:54.258238] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:03.410 [2024-11-26 19:50:54.258352] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:10:03.410 spare 00:10:03.410 19:50:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.410 19:50:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:10:03.410 19:50:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.410 19:50:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:10:03.410 [2024-11-26 19:50:54.264179] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:03.410 [2024-11-26 19:50:54.265855] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:03.410 [2024-11-26 19:50:54.266074] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:10:03.410 [2024-11-26 19:50:54.266135] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:03.410 [2024-11-26 19:50:54.266395] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:03.410 [2024-11-26 19:50:54.266538] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:10:03.410 [2024-11-26 19:50:54.266546] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:10:03.410 [2024-11-26 19:50:54.266674] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:03.410 19:50:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.410 19:50:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:03.410 19:50:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:03.410 19:50:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:03.410 19:50:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:03.410 19:50:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:03.410 19:50:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:03.410 19:50:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:03.410 19:50:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:03.410 19:50:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:03.410 19:50:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:03.410 19:50:54 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:03.410 19:50:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.410 19:50:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:03.410 19:50:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:10:03.410 19:50:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.410 19:50:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:03.410 "name": "raid_bdev1", 00:10:03.410 "uuid": "5ed9b4bb-f7f4-49c8-aad6-f1a0cff494ca", 00:10:03.410 "strip_size_kb": 0, 00:10:03.410 "state": "online", 00:10:03.410 "raid_level": "raid1", 00:10:03.410 "superblock": true, 00:10:03.410 "num_base_bdevs": 2, 00:10:03.410 "num_base_bdevs_discovered": 2, 00:10:03.410 "num_base_bdevs_operational": 2, 00:10:03.410 "base_bdevs_list": [ 00:10:03.410 { 00:10:03.410 "name": "BaseBdev1", 00:10:03.410 "uuid": "94dd733b-6900-5530-bb38-f3e38a7acf82", 00:10:03.410 "is_configured": true, 00:10:03.410 "data_offset": 2048, 00:10:03.410 "data_size": 63488 00:10:03.410 }, 00:10:03.410 { 00:10:03.410 "name": "BaseBdev2", 00:10:03.410 "uuid": "ed878db1-4ef2-56bd-b42c-ca0a11526866", 00:10:03.410 "is_configured": true, 00:10:03.410 "data_offset": 2048, 00:10:03.410 "data_size": 63488 00:10:03.410 } 00:10:03.410 ] 00:10:03.410 }' 00:10:03.410 19:50:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:03.410 19:50:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:10:03.669 19:50:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:03.669 19:50:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.669 19:50:54 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:10:03.669 19:50:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:10:03.669 [2024-11-26 19:50:54.596504] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:03.927 19:50:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.928 19:50:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:10:03.928 19:50:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:10:03.928 19:50:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:03.928 19:50:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.928 19:50:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:10:03.928 19:50:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.928 19:50:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:10:03.928 19:50:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:10:03.928 19:50:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:10:03.928 19:50:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.928 19:50:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:10:03.928 19:50:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:03.928 [2024-11-26 19:50:54.664243] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:03.928 19:50:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:10:03.928 19:50:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:10:03.928 19:50:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:03.928 19:50:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:03.928 19:50:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:03.928 19:50:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:03.928 19:50:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:10:03.928 19:50:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:03.928 19:50:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:03.928 19:50:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:03.928 19:50:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:03.928 19:50:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:03.928 19:50:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.928 19:50:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:03.928 19:50:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:10:03.928 19:50:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.928 19:50:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:03.928 "name": "raid_bdev1", 00:10:03.928 "uuid": "5ed9b4bb-f7f4-49c8-aad6-f1a0cff494ca", 00:10:03.928 "strip_size_kb": 0, 00:10:03.928 "state": "online", 00:10:03.928 
"raid_level": "raid1", 00:10:03.928 "superblock": true, 00:10:03.928 "num_base_bdevs": 2, 00:10:03.928 "num_base_bdevs_discovered": 1, 00:10:03.928 "num_base_bdevs_operational": 1, 00:10:03.928 "base_bdevs_list": [ 00:10:03.928 { 00:10:03.928 "name": null, 00:10:03.928 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:03.928 "is_configured": false, 00:10:03.928 "data_offset": 0, 00:10:03.928 "data_size": 63488 00:10:03.928 }, 00:10:03.928 { 00:10:03.928 "name": "BaseBdev2", 00:10:03.928 "uuid": "ed878db1-4ef2-56bd-b42c-ca0a11526866", 00:10:03.928 "is_configured": true, 00:10:03.928 "data_offset": 2048, 00:10:03.928 "data_size": 63488 00:10:03.928 } 00:10:03.928 ] 00:10:03.928 }' 00:10:03.928 19:50:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:03.928 19:50:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:10:03.928 [2024-11-26 19:50:54.757169] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:10:03.928 I/O size of 3145728 is greater than zero copy threshold (65536). 00:10:03.928 Zero copy mechanism will not be used. 00:10:03.928 Running I/O for 60 seconds... 
00:10:04.187 19:50:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:10:04.187 19:50:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.187 19:50:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:10:04.187 [2024-11-26 19:50:54.968403] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:10:04.187 19:50:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.187 19:50:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:10:04.187 [2024-11-26 19:50:54.994852] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:10:04.187 [2024-11-26 19:50:54.996633] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:10:04.187 [2024-11-26 19:50:55.110645] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:10:04.187 [2024-11-26 19:50:55.111158] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:10:04.445 [2024-11-26 19:50:55.320040] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:10:04.445 [2024-11-26 19:50:55.320312] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:10:05.012 [2024-11-26 19:50:55.662738] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:10:05.012 [2024-11-26 19:50:55.663385] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:10:05.012 139.00 IOPS, 417.00 MiB/s [2024-11-26T19:50:55.949Z] [2024-11-26 19:50:55.770541] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:10:05.012 [2024-11-26 19:50:55.770831] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:10:05.270 19:50:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:10:05.270 19:50:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:05.270 19:50:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:10:05.270 19:50:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:10:05.270 19:50:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:05.270 19:50:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:05.270 19:50:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.270 19:50:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:10:05.270 19:50:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:05.270 19:50:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.270 19:50:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:05.270 "name": "raid_bdev1", 00:10:05.270 "uuid": "5ed9b4bb-f7f4-49c8-aad6-f1a0cff494ca", 00:10:05.270 "strip_size_kb": 0, 00:10:05.270 "state": "online", 00:10:05.270 "raid_level": "raid1", 00:10:05.270 "superblock": true, 00:10:05.270 "num_base_bdevs": 2, 00:10:05.270 "num_base_bdevs_discovered": 2, 00:10:05.270 "num_base_bdevs_operational": 2, 00:10:05.270 "process": { 00:10:05.271 "type": "rebuild", 00:10:05.271 "target": "spare", 00:10:05.271 "progress": { 
00:10:05.271 "blocks": 10240, 00:10:05.271 "percent": 16 00:10:05.271 } 00:10:05.271 }, 00:10:05.271 "base_bdevs_list": [ 00:10:05.271 { 00:10:05.271 "name": "spare", 00:10:05.271 "uuid": "4d4bee72-e7f2-5b43-8cbc-e43de7aca254", 00:10:05.271 "is_configured": true, 00:10:05.271 "data_offset": 2048, 00:10:05.271 "data_size": 63488 00:10:05.271 }, 00:10:05.271 { 00:10:05.271 "name": "BaseBdev2", 00:10:05.271 "uuid": "ed878db1-4ef2-56bd-b42c-ca0a11526866", 00:10:05.271 "is_configured": true, 00:10:05.271 "data_offset": 2048, 00:10:05.271 "data_size": 63488 00:10:05.271 } 00:10:05.271 ] 00:10:05.271 }' 00:10:05.271 19:50:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:05.271 19:50:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:10:05.271 19:50:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:05.271 19:50:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:10:05.271 19:50:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:10:05.271 19:50:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.271 19:50:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:10:05.271 [2024-11-26 19:50:56.094493] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:10:05.529 [2024-11-26 19:50:56.228914] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:10:05.529 [2024-11-26 19:50:56.237124] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:05.529 [2024-11-26 19:50:56.237244] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:10:05.529 [2024-11-26 19:50:56.237263] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed 
to remove target bdev: No such device 00:10:05.529 [2024-11-26 19:50:56.270020] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:10:05.529 19:50:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.529 19:50:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:10:05.529 19:50:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:05.529 19:50:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:05.529 19:50:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:05.529 19:50:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:05.529 19:50:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:10:05.529 19:50:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:05.529 19:50:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:05.529 19:50:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:05.529 19:50:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:05.529 19:50:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:05.529 19:50:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:05.529 19:50:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.529 19:50:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:10:05.529 19:50:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.529 19:50:56 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:05.529 "name": "raid_bdev1", 00:10:05.529 "uuid": "5ed9b4bb-f7f4-49c8-aad6-f1a0cff494ca", 00:10:05.529 "strip_size_kb": 0, 00:10:05.529 "state": "online", 00:10:05.529 "raid_level": "raid1", 00:10:05.529 "superblock": true, 00:10:05.529 "num_base_bdevs": 2, 00:10:05.529 "num_base_bdevs_discovered": 1, 00:10:05.529 "num_base_bdevs_operational": 1, 00:10:05.529 "base_bdevs_list": [ 00:10:05.529 { 00:10:05.529 "name": null, 00:10:05.529 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:05.529 "is_configured": false, 00:10:05.529 "data_offset": 0, 00:10:05.529 "data_size": 63488 00:10:05.529 }, 00:10:05.529 { 00:10:05.529 "name": "BaseBdev2", 00:10:05.529 "uuid": "ed878db1-4ef2-56bd-b42c-ca0a11526866", 00:10:05.529 "is_configured": true, 00:10:05.529 "data_offset": 2048, 00:10:05.529 "data_size": 63488 00:10:05.529 } 00:10:05.529 ] 00:10:05.529 }' 00:10:05.529 19:50:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:05.529 19:50:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:10:05.788 19:50:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:10:05.788 19:50:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:05.788 19:50:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:10:05.788 19:50:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:10:05.788 19:50:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:05.788 19:50:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:05.788 19:50:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:05.788 19:50:56 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.788 19:50:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:10:05.788 19:50:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.788 19:50:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:05.788 "name": "raid_bdev1", 00:10:05.788 "uuid": "5ed9b4bb-f7f4-49c8-aad6-f1a0cff494ca", 00:10:05.788 "strip_size_kb": 0, 00:10:05.788 "state": "online", 00:10:05.788 "raid_level": "raid1", 00:10:05.788 "superblock": true, 00:10:05.788 "num_base_bdevs": 2, 00:10:05.788 "num_base_bdevs_discovered": 1, 00:10:05.788 "num_base_bdevs_operational": 1, 00:10:05.788 "base_bdevs_list": [ 00:10:05.788 { 00:10:05.788 "name": null, 00:10:05.788 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:05.788 "is_configured": false, 00:10:05.788 "data_offset": 0, 00:10:05.788 "data_size": 63488 00:10:05.789 }, 00:10:05.789 { 00:10:05.789 "name": "BaseBdev2", 00:10:05.789 "uuid": "ed878db1-4ef2-56bd-b42c-ca0a11526866", 00:10:05.789 "is_configured": true, 00:10:05.789 "data_offset": 2048, 00:10:05.789 "data_size": 63488 00:10:05.789 } 00:10:05.789 ] 00:10:05.789 }' 00:10:05.789 19:50:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:05.789 19:50:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:10:05.789 19:50:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:05.789 19:50:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:10:05.789 19:50:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:10:05.789 19:50:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.789 19:50:56 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:10:05.789 [2024-11-26 19:50:56.697995] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:10:06.048 19:50:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.048 19:50:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:10:06.048 [2024-11-26 19:50:56.751271] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:10:06.048 [2024-11-26 19:50:56.752957] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:10:06.048 158.50 IOPS, 475.50 MiB/s [2024-11-26T19:50:56.985Z] [2024-11-26 19:50:56.876084] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:10:06.313 [2024-11-26 19:50:57.089440] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:10:06.313 [2024-11-26 19:50:57.089727] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:10:06.570 [2024-11-26 19:50:57.412891] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:10:06.828 [2024-11-26 19:50:57.621744] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:10:06.828 [2024-11-26 19:50:57.622073] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:10:06.828 19:50:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:10:06.828 19:50:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:06.828 19:50:57 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:10:06.828 19:50:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:10:06.828 19:50:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:06.828 19:50:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:06.828 19:50:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:06.828 19:50:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.828 19:50:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:10:06.828 19:50:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.086 152.00 IOPS, 456.00 MiB/s [2024-11-26T19:50:58.023Z] 19:50:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:07.086 "name": "raid_bdev1", 00:10:07.086 "uuid": "5ed9b4bb-f7f4-49c8-aad6-f1a0cff494ca", 00:10:07.086 "strip_size_kb": 0, 00:10:07.086 "state": "online", 00:10:07.086 "raid_level": "raid1", 00:10:07.086 "superblock": true, 00:10:07.086 "num_base_bdevs": 2, 00:10:07.086 "num_base_bdevs_discovered": 2, 00:10:07.086 "num_base_bdevs_operational": 2, 00:10:07.086 "process": { 00:10:07.086 "type": "rebuild", 00:10:07.086 "target": "spare", 00:10:07.086 "progress": { 00:10:07.086 "blocks": 10240, 00:10:07.086 "percent": 16 00:10:07.086 } 00:10:07.086 }, 00:10:07.086 "base_bdevs_list": [ 00:10:07.086 { 00:10:07.086 "name": "spare", 00:10:07.086 "uuid": "4d4bee72-e7f2-5b43-8cbc-e43de7aca254", 00:10:07.086 "is_configured": true, 00:10:07.086 "data_offset": 2048, 00:10:07.086 "data_size": 63488 00:10:07.086 }, 00:10:07.086 { 00:10:07.086 "name": "BaseBdev2", 00:10:07.086 "uuid": "ed878db1-4ef2-56bd-b42c-ca0a11526866", 00:10:07.086 "is_configured": true, 00:10:07.086 "data_offset": 2048, 00:10:07.086 
"data_size": 63488 00:10:07.086 } 00:10:07.086 ] 00:10:07.086 }' 00:10:07.086 19:50:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:07.086 19:50:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:10:07.086 19:50:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:07.086 19:50:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:10:07.086 19:50:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:10:07.086 19:50:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:10:07.086 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:10:07.086 19:50:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:10:07.086 19:50:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:10:07.086 19:50:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:10:07.086 19:50:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=327 00:10:07.086 19:50:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:10:07.086 19:50:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:10:07.086 19:50:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:07.086 19:50:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:10:07.086 19:50:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:10:07.086 19:50:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:07.086 19:50:57 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:07.087 19:50:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.087 19:50:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:10:07.087 19:50:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:07.087 19:50:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.087 [2024-11-26 19:50:57.855242] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:10:07.087 [2024-11-26 19:50:57.855750] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:10:07.087 19:50:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:07.087 "name": "raid_bdev1", 00:10:07.087 "uuid": "5ed9b4bb-f7f4-49c8-aad6-f1a0cff494ca", 00:10:07.087 "strip_size_kb": 0, 00:10:07.087 "state": "online", 00:10:07.087 "raid_level": "raid1", 00:10:07.087 "superblock": true, 00:10:07.087 "num_base_bdevs": 2, 00:10:07.087 "num_base_bdevs_discovered": 2, 00:10:07.087 "num_base_bdevs_operational": 2, 00:10:07.087 "process": { 00:10:07.087 "type": "rebuild", 00:10:07.087 "target": "spare", 00:10:07.087 "progress": { 00:10:07.087 "blocks": 12288, 00:10:07.087 "percent": 19 00:10:07.087 } 00:10:07.087 }, 00:10:07.087 "base_bdevs_list": [ 00:10:07.087 { 00:10:07.087 "name": "spare", 00:10:07.087 "uuid": "4d4bee72-e7f2-5b43-8cbc-e43de7aca254", 00:10:07.087 "is_configured": true, 00:10:07.087 "data_offset": 2048, 00:10:07.087 "data_size": 63488 00:10:07.087 }, 00:10:07.087 { 00:10:07.087 "name": "BaseBdev2", 00:10:07.087 "uuid": "ed878db1-4ef2-56bd-b42c-ca0a11526866", 00:10:07.087 "is_configured": true, 00:10:07.087 "data_offset": 2048, 00:10:07.087 "data_size": 
63488 00:10:07.087 } 00:10:07.087 ] 00:10:07.087 }' 00:10:07.087 19:50:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:07.087 19:50:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:10:07.087 19:50:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:07.087 19:50:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:10:07.087 19:50:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:10:07.087 [2024-11-26 19:50:57.965300] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:10:07.652 [2024-11-26 19:50:58.294420] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:10:07.652 [2024-11-26 19:50:58.503856] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:10:07.910 [2024-11-26 19:50:58.731403] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:10:07.910 129.25 IOPS, 387.75 MiB/s [2024-11-26T19:50:58.847Z] [2024-11-26 19:50:58.843957] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:10:07.910 [2024-11-26 19:50:58.844280] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:10:08.167 19:50:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:10:08.167 19:50:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:10:08.167 19:50:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:10:08.167 19:50:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:10:08.167 19:50:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:10:08.167 19:50:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:08.167 19:50:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:08.167 19:50:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.167 19:50:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:08.167 19:50:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:10:08.167 19:50:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.167 19:50:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:08.167 "name": "raid_bdev1", 00:10:08.167 "uuid": "5ed9b4bb-f7f4-49c8-aad6-f1a0cff494ca", 00:10:08.167 "strip_size_kb": 0, 00:10:08.167 "state": "online", 00:10:08.167 "raid_level": "raid1", 00:10:08.167 "superblock": true, 00:10:08.167 "num_base_bdevs": 2, 00:10:08.167 "num_base_bdevs_discovered": 2, 00:10:08.167 "num_base_bdevs_operational": 2, 00:10:08.167 "process": { 00:10:08.167 "type": "rebuild", 00:10:08.167 "target": "spare", 00:10:08.167 "progress": { 00:10:08.167 "blocks": 28672, 00:10:08.167 "percent": 45 00:10:08.167 } 00:10:08.167 }, 00:10:08.167 "base_bdevs_list": [ 00:10:08.167 { 00:10:08.167 "name": "spare", 00:10:08.167 "uuid": "4d4bee72-e7f2-5b43-8cbc-e43de7aca254", 00:10:08.167 "is_configured": true, 00:10:08.167 "data_offset": 2048, 00:10:08.167 "data_size": 63488 00:10:08.167 }, 00:10:08.167 { 00:10:08.167 "name": "BaseBdev2", 00:10:08.167 "uuid": "ed878db1-4ef2-56bd-b42c-ca0a11526866", 00:10:08.167 "is_configured": true, 00:10:08.167 
"data_offset": 2048, 00:10:08.167 "data_size": 63488 00:10:08.167 } 00:10:08.167 ] 00:10:08.167 }' 00:10:08.167 19:50:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:08.167 19:50:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:10:08.167 19:50:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:08.167 19:50:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:10:08.167 19:50:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:10:08.521 [2024-11-26 19:50:59.193377] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:10:08.521 [2024-11-26 19:50:59.401315] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:10:08.521 [2024-11-26 19:50:59.401638] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:10:09.091 [2024-11-26 19:50:59.743590] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:10:09.349 110.40 IOPS, 331.20 MiB/s [2024-11-26T19:51:00.286Z] 19:51:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:10:09.349 19:51:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:10:09.349 19:51:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:09.349 19:51:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:10:09.349 19:51:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:10:09.349 19:51:00 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:09.349 19:51:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:09.349 19:51:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:09.349 19:51:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.349 19:51:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:10:09.349 [2024-11-26 19:51:00.054725] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:10:09.349 19:51:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.349 19:51:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:09.349 "name": "raid_bdev1", 00:10:09.349 "uuid": "5ed9b4bb-f7f4-49c8-aad6-f1a0cff494ca", 00:10:09.349 "strip_size_kb": 0, 00:10:09.349 "state": "online", 00:10:09.349 "raid_level": "raid1", 00:10:09.349 "superblock": true, 00:10:09.349 "num_base_bdevs": 2, 00:10:09.349 "num_base_bdevs_discovered": 2, 00:10:09.349 "num_base_bdevs_operational": 2, 00:10:09.349 "process": { 00:10:09.350 "type": "rebuild", 00:10:09.350 "target": "spare", 00:10:09.350 "progress": { 00:10:09.350 "blocks": 43008, 00:10:09.350 "percent": 67 00:10:09.350 } 00:10:09.350 }, 00:10:09.350 "base_bdevs_list": [ 00:10:09.350 { 00:10:09.350 "name": "spare", 00:10:09.350 "uuid": "4d4bee72-e7f2-5b43-8cbc-e43de7aca254", 00:10:09.350 "is_configured": true, 00:10:09.350 "data_offset": 2048, 00:10:09.350 "data_size": 63488 00:10:09.350 }, 00:10:09.350 { 00:10:09.350 "name": "BaseBdev2", 00:10:09.350 "uuid": "ed878db1-4ef2-56bd-b42c-ca0a11526866", 00:10:09.350 "is_configured": true, 00:10:09.350 "data_offset": 2048, 00:10:09.350 "data_size": 63488 00:10:09.350 } 00:10:09.350 ] 00:10:09.350 }' 
00:10:09.350 19:51:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:09.350 19:51:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:10:09.350 19:51:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:09.350 19:51:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:10:09.350 19:51:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:10:09.608 [2024-11-26 19:51:00.482124] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:10:10.124 97.67 IOPS, 293.00 MiB/s [2024-11-26T19:51:01.061Z] [2024-11-26 19:51:01.024591] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:10:10.382 [2024-11-26 19:51:01.124635] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:10:10.382 [2024-11-26 19:51:01.126708] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:10.382 19:51:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:10:10.382 19:51:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:10:10.382 19:51:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:10.382 19:51:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:10:10.382 19:51:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:10:10.382 19:51:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:10.382 19:51:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:10.382 19:51:01 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:10.382 19:51:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.382 19:51:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:10:10.382 19:51:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.382 19:51:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:10.382 "name": "raid_bdev1", 00:10:10.382 "uuid": "5ed9b4bb-f7f4-49c8-aad6-f1a0cff494ca", 00:10:10.382 "strip_size_kb": 0, 00:10:10.382 "state": "online", 00:10:10.382 "raid_level": "raid1", 00:10:10.382 "superblock": true, 00:10:10.382 "num_base_bdevs": 2, 00:10:10.382 "num_base_bdevs_discovered": 2, 00:10:10.382 "num_base_bdevs_operational": 2, 00:10:10.382 "base_bdevs_list": [ 00:10:10.382 { 00:10:10.382 "name": "spare", 00:10:10.382 "uuid": "4d4bee72-e7f2-5b43-8cbc-e43de7aca254", 00:10:10.382 "is_configured": true, 00:10:10.382 "data_offset": 2048, 00:10:10.382 "data_size": 63488 00:10:10.382 }, 00:10:10.382 { 00:10:10.382 "name": "BaseBdev2", 00:10:10.382 "uuid": "ed878db1-4ef2-56bd-b42c-ca0a11526866", 00:10:10.382 "is_configured": true, 00:10:10.382 "data_offset": 2048, 00:10:10.382 "data_size": 63488 00:10:10.382 } 00:10:10.382 ] 00:10:10.382 }' 00:10:10.382 19:51:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:10.382 19:51:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:10:10.382 19:51:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:10.382 19:51:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:10:10.382 19:51:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:10:10.382 19:51:01 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:10:10.382 19:51:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:10.382 19:51:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:10:10.382 19:51:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:10:10.382 19:51:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:10.382 19:51:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:10.382 19:51:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:10.382 19:51:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.382 19:51:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:10:10.382 19:51:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.382 19:51:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:10.382 "name": "raid_bdev1", 00:10:10.382 "uuid": "5ed9b4bb-f7f4-49c8-aad6-f1a0cff494ca", 00:10:10.382 "strip_size_kb": 0, 00:10:10.382 "state": "online", 00:10:10.382 "raid_level": "raid1", 00:10:10.382 "superblock": true, 00:10:10.382 "num_base_bdevs": 2, 00:10:10.382 "num_base_bdevs_discovered": 2, 00:10:10.382 "num_base_bdevs_operational": 2, 00:10:10.382 "base_bdevs_list": [ 00:10:10.382 { 00:10:10.382 "name": "spare", 00:10:10.383 "uuid": "4d4bee72-e7f2-5b43-8cbc-e43de7aca254", 00:10:10.383 "is_configured": true, 00:10:10.383 "data_offset": 2048, 00:10:10.383 "data_size": 63488 00:10:10.383 }, 00:10:10.383 { 00:10:10.383 "name": "BaseBdev2", 00:10:10.383 "uuid": "ed878db1-4ef2-56bd-b42c-ca0a11526866", 00:10:10.383 "is_configured": true, 00:10:10.383 
"data_offset": 2048, 00:10:10.383 "data_size": 63488 00:10:10.383 } 00:10:10.383 ] 00:10:10.383 }' 00:10:10.383 19:51:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:10.383 19:51:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:10:10.383 19:51:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:10.640 19:51:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:10:10.640 19:51:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:10.640 19:51:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:10.640 19:51:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:10.640 19:51:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:10.640 19:51:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:10.640 19:51:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:10.640 19:51:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:10.640 19:51:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:10.640 19:51:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:10.640 19:51:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:10.640 19:51:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:10.640 19:51:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.640 19:51:01 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:10:10.640 19:51:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:10.640 19:51:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.640 19:51:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:10.640 "name": "raid_bdev1", 00:10:10.640 "uuid": "5ed9b4bb-f7f4-49c8-aad6-f1a0cff494ca", 00:10:10.640 "strip_size_kb": 0, 00:10:10.640 "state": "online", 00:10:10.640 "raid_level": "raid1", 00:10:10.640 "superblock": true, 00:10:10.640 "num_base_bdevs": 2, 00:10:10.640 "num_base_bdevs_discovered": 2, 00:10:10.640 "num_base_bdevs_operational": 2, 00:10:10.640 "base_bdevs_list": [ 00:10:10.640 { 00:10:10.640 "name": "spare", 00:10:10.640 "uuid": "4d4bee72-e7f2-5b43-8cbc-e43de7aca254", 00:10:10.640 "is_configured": true, 00:10:10.640 "data_offset": 2048, 00:10:10.640 "data_size": 63488 00:10:10.640 }, 00:10:10.640 { 00:10:10.640 "name": "BaseBdev2", 00:10:10.640 "uuid": "ed878db1-4ef2-56bd-b42c-ca0a11526866", 00:10:10.640 "is_configured": true, 00:10:10.640 "data_offset": 2048, 00:10:10.640 "data_size": 63488 00:10:10.640 } 00:10:10.640 ] 00:10:10.640 }' 00:10:10.640 19:51:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:10.640 19:51:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:10:10.898 19:51:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:10.898 19:51:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.898 19:51:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:10:10.898 [2024-11-26 19:51:01.684375] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:10.898 [2024-11-26 19:51:01.684493] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid 
bdev state changing from online to offline 00:10:10.898 89.14 IOPS, 267.43 MiB/s 00:10:10.898 Latency(us) 00:10:10.898 [2024-11-26T19:51:01.835Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:10.898 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:10:10.898 raid_bdev1 : 7.03 89.11 267.32 0.00 0.00 14012.57 231.58 114536.76 00:10:10.898 [2024-11-26T19:51:01.835Z] =================================================================================================================== 00:10:10.898 [2024-11-26T19:51:01.835Z] Total : 89.11 267.32 0.00 0.00 14012.57 231.58 114536.76 00:10:10.898 { 00:10:10.898 "results": [ 00:10:10.898 { 00:10:10.898 "job": "raid_bdev1", 00:10:10.898 "core_mask": "0x1", 00:10:10.898 "workload": "randrw", 00:10:10.898 "percentage": 50, 00:10:10.898 "status": "finished", 00:10:10.898 "queue_depth": 2, 00:10:10.898 "io_size": 3145728, 00:10:10.898 "runtime": 7.025374, 00:10:10.898 "iops": 89.10557644333241, 00:10:10.898 "mibps": 267.3167293299972, 00:10:10.898 "io_failed": 0, 00:10:10.898 "io_timeout": 0, 00:10:10.898 "avg_latency_us": 14012.57177684935, 00:10:10.898 "min_latency_us": 231.58153846153846, 00:10:10.898 "max_latency_us": 114536.76307692307 00:10:10.898 } 00:10:10.898 ], 00:10:10.898 "core_count": 1 00:10:10.898 } 00:10:10.898 [2024-11-26 19:51:01.797629] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:10.898 [2024-11-26 19:51:01.797687] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:10.898 [2024-11-26 19:51:01.797767] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:10.898 [2024-11-26 19:51:01.797777] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:10:10.898 19:51:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.898 
19:51:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:10.898 19:51:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:10:10.898 19:51:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.898 19:51:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:10:10.898 19:51:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.157 19:51:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:10:11.157 19:51:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:10:11.157 19:51:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:10:11.157 19:51:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:10:11.157 19:51:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:10:11.157 19:51:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:10:11.157 19:51:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:10:11.157 19:51:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:10:11.157 19:51:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:10:11.157 19:51:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:10:11.157 19:51:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:10:11.157 19:51:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:10:11.157 19:51:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:10:11.157 /dev/nbd0 
00:10:11.157 19:51:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:10:11.157 19:51:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:10:11.157 19:51:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:10:11.157 19:51:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:10:11.157 19:51:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:11.157 19:51:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:11.157 19:51:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:10:11.157 19:51:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:10:11.157 19:51:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:11.157 19:51:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:11.157 19:51:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:11.157 1+0 records in 00:10:11.157 1+0 records out 00:10:11.157 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00031515 s, 13.0 MB/s 00:10:11.157 19:51:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:11.157 19:51:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:10:11.157 19:51:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:11.157 19:51:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:11.157 19:51:02 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@893 -- # return 0 00:10:11.157 19:51:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:11.157 19:51:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:10:11.157 19:51:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:10:11.157 19:51:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:10:11.157 19:51:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:10:11.157 19:51:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:10:11.157 19:51:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:10:11.157 19:51:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:10:11.157 19:51:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:10:11.157 19:51:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:10:11.157 19:51:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:10:11.157 19:51:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:10:11.157 19:51:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:10:11.157 19:51:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:10:11.415 /dev/nbd1 00:10:11.415 19:51:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:10:11.415 19:51:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:10:11.415 19:51:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:10:11.415 19:51:02 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:10:11.415 19:51:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:11.415 19:51:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:11.415 19:51:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:10:11.415 19:51:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:10:11.415 19:51:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:11.415 19:51:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:11.415 19:51:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:11.415 1+0 records in 00:10:11.415 1+0 records out 00:10:11.415 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000269142 s, 15.2 MB/s 00:10:11.415 19:51:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:11.415 19:51:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:10:11.415 19:51:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:11.415 19:51:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:11.415 19:51:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:10:11.415 19:51:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:11.415 19:51:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:10:11.415 19:51:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 
00:10:11.672 19:51:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:10:11.672 19:51:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:10:11.672 19:51:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:10:11.672 19:51:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:11.672 19:51:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:10:11.672 19:51:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:11.672 19:51:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:10:11.930 19:51:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:10:11.930 19:51:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:10:11.930 19:51:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:10:11.930 19:51:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:11.930 19:51:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:11.930 19:51:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:10:11.930 19:51:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:10:11.930 19:51:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:10:11.930 19:51:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:10:11.930 19:51:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:10:11.930 19:51:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 
-- # nbd_list=('/dev/nbd0') 00:10:11.930 19:51:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:11.930 19:51:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:10:11.930 19:51:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:11.930 19:51:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:10:11.930 19:51:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:12.187 19:51:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:12.187 19:51:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:12.187 19:51:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:12.187 19:51:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:12.187 19:51:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:12.187 19:51:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:10:12.187 19:51:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:10:12.187 19:51:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:10:12.187 19:51:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:10:12.187 19:51:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.188 19:51:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:10:12.188 19:51:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.188 19:51:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b 
spare_delay -p spare 00:10:12.188 19:51:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.188 19:51:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:10:12.188 [2024-11-26 19:51:02.880830] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:10:12.188 [2024-11-26 19:51:02.880885] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:12.188 [2024-11-26 19:51:02.880908] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:10:12.188 [2024-11-26 19:51:02.880917] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:12.188 [2024-11-26 19:51:02.882977] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:12.188 [2024-11-26 19:51:02.883009] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:10:12.188 [2024-11-26 19:51:02.883097] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:10:12.188 [2024-11-26 19:51:02.883146] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:10:12.188 [2024-11-26 19:51:02.883268] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:12.188 spare 00:10:12.188 19:51:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.188 19:51:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:10:12.188 19:51:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.188 19:51:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:10:12.188 [2024-11-26 19:51:02.983369] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:10:12.188 [2024-11-26 19:51:02.983421] bdev_raid.c:1735:raid_bdev_configure_cont: 
*DEBUG*: blockcnt 63488, blocklen 512 00:10:12.188 [2024-11-26 19:51:02.983746] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b0d0 00:10:12.188 [2024-11-26 19:51:02.983921] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:10:12.188 [2024-11-26 19:51:02.983937] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:10:12.188 [2024-11-26 19:51:02.984105] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:12.188 19:51:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.188 19:51:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:12.188 19:51:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:12.188 19:51:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:12.188 19:51:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:12.188 19:51:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:12.188 19:51:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:12.188 19:51:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:12.188 19:51:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:12.188 19:51:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:12.188 19:51:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:12.188 19:51:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:12.188 19:51:02 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:12.188 19:51:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.188 19:51:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:10:12.188 19:51:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.188 19:51:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:12.188 "name": "raid_bdev1", 00:10:12.188 "uuid": "5ed9b4bb-f7f4-49c8-aad6-f1a0cff494ca", 00:10:12.188 "strip_size_kb": 0, 00:10:12.188 "state": "online", 00:10:12.188 "raid_level": "raid1", 00:10:12.188 "superblock": true, 00:10:12.188 "num_base_bdevs": 2, 00:10:12.188 "num_base_bdevs_discovered": 2, 00:10:12.188 "num_base_bdevs_operational": 2, 00:10:12.188 "base_bdevs_list": [ 00:10:12.188 { 00:10:12.188 "name": "spare", 00:10:12.188 "uuid": "4d4bee72-e7f2-5b43-8cbc-e43de7aca254", 00:10:12.188 "is_configured": true, 00:10:12.188 "data_offset": 2048, 00:10:12.188 "data_size": 63488 00:10:12.188 }, 00:10:12.188 { 00:10:12.188 "name": "BaseBdev2", 00:10:12.188 "uuid": "ed878db1-4ef2-56bd-b42c-ca0a11526866", 00:10:12.188 "is_configured": true, 00:10:12.188 "data_offset": 2048, 00:10:12.188 "data_size": 63488 00:10:12.188 } 00:10:12.188 ] 00:10:12.188 }' 00:10:12.188 19:51:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:12.188 19:51:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:10:12.446 19:51:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:10:12.446 19:51:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:12.446 19:51:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:10:12.446 19:51:03 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@171 -- # local target=none 00:10:12.446 19:51:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:12.446 19:51:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:12.446 19:51:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:12.446 19:51:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.446 19:51:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:10:12.446 19:51:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.446 19:51:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:12.446 "name": "raid_bdev1", 00:10:12.446 "uuid": "5ed9b4bb-f7f4-49c8-aad6-f1a0cff494ca", 00:10:12.446 "strip_size_kb": 0, 00:10:12.446 "state": "online", 00:10:12.446 "raid_level": "raid1", 00:10:12.446 "superblock": true, 00:10:12.446 "num_base_bdevs": 2, 00:10:12.446 "num_base_bdevs_discovered": 2, 00:10:12.446 "num_base_bdevs_operational": 2, 00:10:12.446 "base_bdevs_list": [ 00:10:12.446 { 00:10:12.446 "name": "spare", 00:10:12.446 "uuid": "4d4bee72-e7f2-5b43-8cbc-e43de7aca254", 00:10:12.446 "is_configured": true, 00:10:12.446 "data_offset": 2048, 00:10:12.446 "data_size": 63488 00:10:12.446 }, 00:10:12.446 { 00:10:12.446 "name": "BaseBdev2", 00:10:12.446 "uuid": "ed878db1-4ef2-56bd-b42c-ca0a11526866", 00:10:12.446 "is_configured": true, 00:10:12.446 "data_offset": 2048, 00:10:12.446 "data_size": 63488 00:10:12.446 } 00:10:12.446 ] 00:10:12.446 }' 00:10:12.446 19:51:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:12.446 19:51:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:10:12.446 19:51:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 
-- # jq -r '.process.target // "none"' 00:10:12.703 19:51:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:10:12.703 19:51:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:12.703 19:51:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:10:12.703 19:51:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.703 19:51:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:10:12.703 19:51:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.703 19:51:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:10:12.703 19:51:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:10:12.703 19:51:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.703 19:51:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:10:12.703 [2024-11-26 19:51:03.421025] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:10:12.703 19:51:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.703 19:51:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:10:12.703 19:51:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:12.703 19:51:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:12.703 19:51:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:12.703 19:51:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:12.703 19:51:03 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:10:12.703 19:51:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:12.703 19:51:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:12.703 19:51:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:12.703 19:51:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:12.703 19:51:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:12.703 19:51:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:12.703 19:51:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.703 19:51:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:10:12.703 19:51:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.703 19:51:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:12.703 "name": "raid_bdev1", 00:10:12.703 "uuid": "5ed9b4bb-f7f4-49c8-aad6-f1a0cff494ca", 00:10:12.703 "strip_size_kb": 0, 00:10:12.703 "state": "online", 00:10:12.703 "raid_level": "raid1", 00:10:12.703 "superblock": true, 00:10:12.703 "num_base_bdevs": 2, 00:10:12.703 "num_base_bdevs_discovered": 1, 00:10:12.703 "num_base_bdevs_operational": 1, 00:10:12.703 "base_bdevs_list": [ 00:10:12.703 { 00:10:12.703 "name": null, 00:10:12.703 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:12.703 "is_configured": false, 00:10:12.703 "data_offset": 0, 00:10:12.703 "data_size": 63488 00:10:12.703 }, 00:10:12.703 { 00:10:12.703 "name": "BaseBdev2", 00:10:12.703 "uuid": "ed878db1-4ef2-56bd-b42c-ca0a11526866", 00:10:12.703 "is_configured": true, 00:10:12.703 "data_offset": 2048, 00:10:12.703 
"data_size": 63488 00:10:12.703 } 00:10:12.703 ] 00:10:12.703 }' 00:10:12.703 19:51:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:12.703 19:51:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:10:12.960 19:51:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:10:12.960 19:51:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.960 19:51:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:10:12.960 [2024-11-26 19:51:03.745138] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:10:12.960 [2024-11-26 19:51:03.745325] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:10:12.960 [2024-11-26 19:51:03.745357] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:10:12.960 [2024-11-26 19:51:03.745389] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:10:12.960 [2024-11-26 19:51:03.755193] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b1a0 00:10:12.960 19:51:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.960 19:51:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:10:12.960 [2024-11-26 19:51:03.756876] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:10:13.918 19:51:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:10:13.918 19:51:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:13.918 19:51:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:10:13.918 19:51:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:10:13.918 19:51:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:13.918 19:51:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:13.918 19:51:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.918 19:51:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:10:13.918 19:51:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:13.918 19:51:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.918 19:51:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:13.918 "name": "raid_bdev1", 00:10:13.918 "uuid": "5ed9b4bb-f7f4-49c8-aad6-f1a0cff494ca", 00:10:13.918 "strip_size_kb": 0, 00:10:13.918 "state": "online", 
00:10:13.918 "raid_level": "raid1", 00:10:13.918 "superblock": true, 00:10:13.918 "num_base_bdevs": 2, 00:10:13.918 "num_base_bdevs_discovered": 2, 00:10:13.918 "num_base_bdevs_operational": 2, 00:10:13.918 "process": { 00:10:13.918 "type": "rebuild", 00:10:13.918 "target": "spare", 00:10:13.918 "progress": { 00:10:13.918 "blocks": 20480, 00:10:13.918 "percent": 32 00:10:13.918 } 00:10:13.918 }, 00:10:13.918 "base_bdevs_list": [ 00:10:13.918 { 00:10:13.918 "name": "spare", 00:10:13.918 "uuid": "4d4bee72-e7f2-5b43-8cbc-e43de7aca254", 00:10:13.918 "is_configured": true, 00:10:13.918 "data_offset": 2048, 00:10:13.918 "data_size": 63488 00:10:13.918 }, 00:10:13.918 { 00:10:13.918 "name": "BaseBdev2", 00:10:13.918 "uuid": "ed878db1-4ef2-56bd-b42c-ca0a11526866", 00:10:13.918 "is_configured": true, 00:10:13.918 "data_offset": 2048, 00:10:13.918 "data_size": 63488 00:10:13.918 } 00:10:13.918 ] 00:10:13.918 }' 00:10:13.918 19:51:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:13.918 19:51:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:10:13.918 19:51:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:13.918 19:51:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:10:13.918 19:51:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:10:13.918 19:51:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.918 19:51:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:10:14.174 [2024-11-26 19:51:04.855222] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:10:14.174 [2024-11-26 19:51:04.863120] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:10:14.174 [2024-11-26 
19:51:04.863177] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:14.174 [2024-11-26 19:51:04.863190] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:10:14.174 [2024-11-26 19:51:04.863198] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:10:14.174 19:51:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.174 19:51:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:10:14.174 19:51:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:14.174 19:51:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:14.174 19:51:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:14.174 19:51:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:14.174 19:51:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:10:14.174 19:51:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:14.174 19:51:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:14.174 19:51:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:14.174 19:51:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:14.174 19:51:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:14.174 19:51:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:14.174 19:51:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.174 19:51:04 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:10:14.174 19:51:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.174 19:51:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:14.174 "name": "raid_bdev1", 00:10:14.174 "uuid": "5ed9b4bb-f7f4-49c8-aad6-f1a0cff494ca", 00:10:14.174 "strip_size_kb": 0, 00:10:14.174 "state": "online", 00:10:14.174 "raid_level": "raid1", 00:10:14.175 "superblock": true, 00:10:14.175 "num_base_bdevs": 2, 00:10:14.175 "num_base_bdevs_discovered": 1, 00:10:14.175 "num_base_bdevs_operational": 1, 00:10:14.175 "base_bdevs_list": [ 00:10:14.175 { 00:10:14.175 "name": null, 00:10:14.175 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:14.175 "is_configured": false, 00:10:14.175 "data_offset": 0, 00:10:14.175 "data_size": 63488 00:10:14.175 }, 00:10:14.175 { 00:10:14.175 "name": "BaseBdev2", 00:10:14.175 "uuid": "ed878db1-4ef2-56bd-b42c-ca0a11526866", 00:10:14.175 "is_configured": true, 00:10:14.175 "data_offset": 2048, 00:10:14.175 "data_size": 63488 00:10:14.175 } 00:10:14.175 ] 00:10:14.175 }' 00:10:14.175 19:51:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:14.175 19:51:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:10:14.432 19:51:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:10:14.432 19:51:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.432 19:51:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:10:14.432 [2024-11-26 19:51:05.208242] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:10:14.432 [2024-11-26 19:51:05.208316] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:14.432 [2024-11-26 19:51:05.208337] vbdev_passthru.c: 681:vbdev_passthru_register: 
*NOTICE*: io_device created at: 0x0x61600000ae80 00:10:14.432 [2024-11-26 19:51:05.208355] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:14.432 [2024-11-26 19:51:05.208790] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:14.432 [2024-11-26 19:51:05.208813] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:10:14.432 [2024-11-26 19:51:05.208900] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:10:14.432 [2024-11-26 19:51:05.208913] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:10:14.432 [2024-11-26 19:51:05.208922] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:10:14.432 [2024-11-26 19:51:05.208943] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:10:14.432 [2024-11-26 19:51:05.218587] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b270 00:10:14.432 spare 00:10:14.432 19:51:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.432 19:51:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:10:14.432 [2024-11-26 19:51:05.220304] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:10:15.365 19:51:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:10:15.365 19:51:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:15.365 19:51:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:10:15.365 19:51:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:10:15.365 19:51:06 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:15.365 19:51:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:15.365 19:51:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:15.365 19:51:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.365 19:51:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:10:15.365 19:51:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.365 19:51:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:15.365 "name": "raid_bdev1", 00:10:15.365 "uuid": "5ed9b4bb-f7f4-49c8-aad6-f1a0cff494ca", 00:10:15.365 "strip_size_kb": 0, 00:10:15.365 "state": "online", 00:10:15.365 "raid_level": "raid1", 00:10:15.365 "superblock": true, 00:10:15.365 "num_base_bdevs": 2, 00:10:15.365 "num_base_bdevs_discovered": 2, 00:10:15.365 "num_base_bdevs_operational": 2, 00:10:15.365 "process": { 00:10:15.365 "type": "rebuild", 00:10:15.365 "target": "spare", 00:10:15.365 "progress": { 00:10:15.365 "blocks": 20480, 00:10:15.365 "percent": 32 00:10:15.365 } 00:10:15.365 }, 00:10:15.365 "base_bdevs_list": [ 00:10:15.365 { 00:10:15.365 "name": "spare", 00:10:15.365 "uuid": "4d4bee72-e7f2-5b43-8cbc-e43de7aca254", 00:10:15.365 "is_configured": true, 00:10:15.365 "data_offset": 2048, 00:10:15.365 "data_size": 63488 00:10:15.365 }, 00:10:15.365 { 00:10:15.365 "name": "BaseBdev2", 00:10:15.365 "uuid": "ed878db1-4ef2-56bd-b42c-ca0a11526866", 00:10:15.365 "is_configured": true, 00:10:15.365 "data_offset": 2048, 00:10:15.365 "data_size": 63488 00:10:15.365 } 00:10:15.365 ] 00:10:15.365 }' 00:10:15.365 19:51:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:15.623 19:51:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:10:15.623 19:51:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:15.623 19:51:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:10:15.623 19:51:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:10:15.623 19:51:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.623 19:51:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:10:15.623 [2024-11-26 19:51:06.342845] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:10:15.623 [2024-11-26 19:51:06.427366] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:10:15.623 [2024-11-26 19:51:06.427444] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:15.623 [2024-11-26 19:51:06.427459] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:10:15.623 [2024-11-26 19:51:06.427465] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:10:15.623 19:51:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.623 19:51:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:10:15.623 19:51:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:15.623 19:51:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:15.623 19:51:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:15.623 19:51:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:15.623 19:51:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=1 00:10:15.623 19:51:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:15.623 19:51:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:15.623 19:51:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:15.623 19:51:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:15.623 19:51:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:15.623 19:51:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:15.623 19:51:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.623 19:51:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:10:15.623 19:51:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.623 19:51:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:15.623 "name": "raid_bdev1", 00:10:15.623 "uuid": "5ed9b4bb-f7f4-49c8-aad6-f1a0cff494ca", 00:10:15.623 "strip_size_kb": 0, 00:10:15.623 "state": "online", 00:10:15.623 "raid_level": "raid1", 00:10:15.623 "superblock": true, 00:10:15.623 "num_base_bdevs": 2, 00:10:15.623 "num_base_bdevs_discovered": 1, 00:10:15.623 "num_base_bdevs_operational": 1, 00:10:15.623 "base_bdevs_list": [ 00:10:15.623 { 00:10:15.623 "name": null, 00:10:15.623 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:15.623 "is_configured": false, 00:10:15.623 "data_offset": 0, 00:10:15.623 "data_size": 63488 00:10:15.623 }, 00:10:15.623 { 00:10:15.623 "name": "BaseBdev2", 00:10:15.623 "uuid": "ed878db1-4ef2-56bd-b42c-ca0a11526866", 00:10:15.623 "is_configured": true, 00:10:15.623 "data_offset": 2048, 00:10:15.623 "data_size": 63488 00:10:15.623 } 00:10:15.623 ] 00:10:15.623 }' 
00:10:15.623 19:51:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:15.623 19:51:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:10:15.880 19:51:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:10:15.880 19:51:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:15.880 19:51:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:10:15.880 19:51:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:10:15.880 19:51:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:15.880 19:51:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:15.880 19:51:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:15.880 19:51:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.880 19:51:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:10:15.880 19:51:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.880 19:51:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:15.880 "name": "raid_bdev1", 00:10:15.880 "uuid": "5ed9b4bb-f7f4-49c8-aad6-f1a0cff494ca", 00:10:15.880 "strip_size_kb": 0, 00:10:15.880 "state": "online", 00:10:15.880 "raid_level": "raid1", 00:10:15.880 "superblock": true, 00:10:15.880 "num_base_bdevs": 2, 00:10:15.880 "num_base_bdevs_discovered": 1, 00:10:15.880 "num_base_bdevs_operational": 1, 00:10:15.880 "base_bdevs_list": [ 00:10:15.880 { 00:10:15.880 "name": null, 00:10:15.880 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:15.880 "is_configured": false, 00:10:15.880 "data_offset": 0, 
00:10:15.880 "data_size": 63488 00:10:15.880 }, 00:10:15.880 { 00:10:15.880 "name": "BaseBdev2", 00:10:15.880 "uuid": "ed878db1-4ef2-56bd-b42c-ca0a11526866", 00:10:15.880 "is_configured": true, 00:10:15.880 "data_offset": 2048, 00:10:15.880 "data_size": 63488 00:10:15.880 } 00:10:15.880 ] 00:10:15.880 }' 00:10:15.880 19:51:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:16.138 19:51:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:10:16.138 19:51:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:16.138 19:51:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:10:16.138 19:51:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:10:16.138 19:51:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.138 19:51:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:10:16.138 19:51:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.138 19:51:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:10:16.138 19:51:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.138 19:51:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:10:16.138 [2024-11-26 19:51:06.880665] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:10:16.138 [2024-11-26 19:51:06.880721] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:16.138 [2024-11-26 19:51:06.880745] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:10:16.138 [2024-11-26 19:51:06.880753] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:16.138 [2024-11-26 19:51:06.881174] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:16.138 [2024-11-26 19:51:06.881192] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:16.138 [2024-11-26 19:51:06.881266] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:10:16.138 [2024-11-26 19:51:06.881279] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:10:16.138 [2024-11-26 19:51:06.881287] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:10:16.138 [2024-11-26 19:51:06.881296] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:10:16.138 BaseBdev1 00:10:16.138 19:51:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.138 19:51:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:10:17.071 19:51:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:10:17.071 19:51:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:17.071 19:51:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:17.071 19:51:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:17.071 19:51:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:17.071 19:51:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:10:17.071 19:51:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:17.071 19:51:07 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:17.071 19:51:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:17.071 19:51:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:17.071 19:51:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:17.071 19:51:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.072 19:51:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:17.072 19:51:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:10:17.072 19:51:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.072 19:51:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:17.072 "name": "raid_bdev1", 00:10:17.072 "uuid": "5ed9b4bb-f7f4-49c8-aad6-f1a0cff494ca", 00:10:17.072 "strip_size_kb": 0, 00:10:17.072 "state": "online", 00:10:17.072 "raid_level": "raid1", 00:10:17.072 "superblock": true, 00:10:17.072 "num_base_bdevs": 2, 00:10:17.072 "num_base_bdevs_discovered": 1, 00:10:17.072 "num_base_bdevs_operational": 1, 00:10:17.072 "base_bdevs_list": [ 00:10:17.072 { 00:10:17.072 "name": null, 00:10:17.072 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:17.072 "is_configured": false, 00:10:17.072 "data_offset": 0, 00:10:17.072 "data_size": 63488 00:10:17.072 }, 00:10:17.072 { 00:10:17.072 "name": "BaseBdev2", 00:10:17.072 "uuid": "ed878db1-4ef2-56bd-b42c-ca0a11526866", 00:10:17.072 "is_configured": true, 00:10:17.072 "data_offset": 2048, 00:10:17.072 "data_size": 63488 00:10:17.072 } 00:10:17.072 ] 00:10:17.072 }' 00:10:17.072 19:51:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:17.072 19:51:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 
00:10:17.330 19:51:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:10:17.330 19:51:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:17.330 19:51:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:10:17.330 19:51:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:10:17.330 19:51:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:17.330 19:51:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:17.330 19:51:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:17.330 19:51:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.330 19:51:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:10:17.330 19:51:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.330 19:51:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:17.330 "name": "raid_bdev1", 00:10:17.330 "uuid": "5ed9b4bb-f7f4-49c8-aad6-f1a0cff494ca", 00:10:17.330 "strip_size_kb": 0, 00:10:17.330 "state": "online", 00:10:17.331 "raid_level": "raid1", 00:10:17.331 "superblock": true, 00:10:17.331 "num_base_bdevs": 2, 00:10:17.331 "num_base_bdevs_discovered": 1, 00:10:17.331 "num_base_bdevs_operational": 1, 00:10:17.331 "base_bdevs_list": [ 00:10:17.331 { 00:10:17.331 "name": null, 00:10:17.331 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:17.331 "is_configured": false, 00:10:17.331 "data_offset": 0, 00:10:17.331 "data_size": 63488 00:10:17.331 }, 00:10:17.331 { 00:10:17.331 "name": "BaseBdev2", 00:10:17.331 "uuid": "ed878db1-4ef2-56bd-b42c-ca0a11526866", 00:10:17.331 "is_configured": true, 
00:10:17.331 "data_offset": 2048, 00:10:17.331 "data_size": 63488 00:10:17.331 } 00:10:17.331 ] 00:10:17.331 }' 00:10:17.331 19:51:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:17.331 19:51:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:10:17.331 19:51:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:17.589 19:51:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:10:17.589 19:51:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:10:17.589 19:51:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # local es=0 00:10:17.589 19:51:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:10:17.589 19:51:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:10:17.589 19:51:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:17.589 19:51:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:10:17.589 19:51:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:17.589 19:51:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:10:17.589 19:51:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.589 19:51:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:10:17.589 [2024-11-26 19:51:08.305125] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:17.589 [2024-11-26 19:51:08.305289] 
bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:10:17.589 [2024-11-26 19:51:08.305310] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:10:17.589 request: 00:10:17.589 { 00:10:17.589 "base_bdev": "BaseBdev1", 00:10:17.589 "raid_bdev": "raid_bdev1", 00:10:17.589 "method": "bdev_raid_add_base_bdev", 00:10:17.589 "req_id": 1 00:10:17.589 } 00:10:17.589 Got JSON-RPC error response 00:10:17.589 response: 00:10:17.589 { 00:10:17.589 "code": -22, 00:10:17.589 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:10:17.589 } 00:10:17.589 19:51:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:17.589 19:51:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # es=1 00:10:17.589 19:51:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:17.589 19:51:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:17.589 19:51:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:17.589 19:51:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:10:18.523 19:51:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:10:18.523 19:51:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:18.523 19:51:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:18.523 19:51:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:18.523 19:51:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:18.523 19:51:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:10:18.523 19:51:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:18.523 19:51:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:18.523 19:51:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:18.523 19:51:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:18.523 19:51:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:18.523 19:51:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:18.523 19:51:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.523 19:51:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:10:18.523 19:51:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.523 19:51:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:18.523 "name": "raid_bdev1", 00:10:18.523 "uuid": "5ed9b4bb-f7f4-49c8-aad6-f1a0cff494ca", 00:10:18.523 "strip_size_kb": 0, 00:10:18.523 "state": "online", 00:10:18.523 "raid_level": "raid1", 00:10:18.523 "superblock": true, 00:10:18.523 "num_base_bdevs": 2, 00:10:18.523 "num_base_bdevs_discovered": 1, 00:10:18.523 "num_base_bdevs_operational": 1, 00:10:18.523 "base_bdevs_list": [ 00:10:18.523 { 00:10:18.523 "name": null, 00:10:18.523 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:18.523 "is_configured": false, 00:10:18.523 "data_offset": 0, 00:10:18.523 "data_size": 63488 00:10:18.523 }, 00:10:18.523 { 00:10:18.523 "name": "BaseBdev2", 00:10:18.523 "uuid": "ed878db1-4ef2-56bd-b42c-ca0a11526866", 00:10:18.523 "is_configured": true, 00:10:18.523 "data_offset": 2048, 00:10:18.523 "data_size": 63488 00:10:18.523 } 00:10:18.523 ] 00:10:18.523 }' 
00:10:18.523 19:51:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:18.523 19:51:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:10:18.790 19:51:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:10:18.790 19:51:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:18.790 19:51:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:10:18.790 19:51:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:10:18.790 19:51:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:18.790 19:51:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:18.790 19:51:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:18.790 19:51:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.790 19:51:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:10:18.790 19:51:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.790 19:51:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:18.790 "name": "raid_bdev1", 00:10:18.790 "uuid": "5ed9b4bb-f7f4-49c8-aad6-f1a0cff494ca", 00:10:18.790 "strip_size_kb": 0, 00:10:18.790 "state": "online", 00:10:18.790 "raid_level": "raid1", 00:10:18.790 "superblock": true, 00:10:18.790 "num_base_bdevs": 2, 00:10:18.790 "num_base_bdevs_discovered": 1, 00:10:18.790 "num_base_bdevs_operational": 1, 00:10:18.790 "base_bdevs_list": [ 00:10:18.790 { 00:10:18.790 "name": null, 00:10:18.790 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:18.790 "is_configured": false, 00:10:18.790 "data_offset": 0, 
00:10:18.790 "data_size": 63488 00:10:18.790 }, 00:10:18.790 { 00:10:18.790 "name": "BaseBdev2", 00:10:18.790 "uuid": "ed878db1-4ef2-56bd-b42c-ca0a11526866", 00:10:18.790 "is_configured": true, 00:10:18.790 "data_offset": 2048, 00:10:18.790 "data_size": 63488 00:10:18.790 } 00:10:18.790 ] 00:10:18.790 }' 00:10:18.790 19:51:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:18.790 19:51:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:10:18.790 19:51:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:18.790 19:51:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:10:18.790 19:51:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 74754 00:10:18.790 19:51:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # '[' -z 74754 ']' 00:10:18.790 19:51:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # kill -0 74754 00:10:18.790 19:51:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # uname 00:10:19.048 19:51:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:19.048 19:51:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74754 00:10:19.048 19:51:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:19.048 killing process with pid 74754 00:10:19.048 19:51:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:19.048 19:51:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74754' 00:10:19.048 Received shutdown signal, test time was about 14.989585 seconds 00:10:19.048 00:10:19.048 Latency(us) 00:10:19.048 [2024-11-26T19:51:09.985Z] Device 
Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:19.048 [2024-11-26T19:51:09.985Z] =================================================================================================================== 00:10:19.048 [2024-11-26T19:51:09.985Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:19.048 19:51:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@973 -- # kill 74754 00:10:19.048 19:51:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@978 -- # wait 74754 00:10:19.048 [2024-11-26 19:51:09.748548] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:19.048 [2024-11-26 19:51:09.748675] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:19.048 [2024-11-26 19:51:09.748726] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:19.048 [2024-11-26 19:51:09.748737] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:10:19.048 [2024-11-26 19:51:09.867998] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:19.614 19:51:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:10:19.614 00:10:19.614 real 0m17.283s 00:10:19.614 user 0m21.892s 00:10:19.614 sys 0m1.607s 00:10:19.614 19:51:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:19.614 ************************************ 00:10:19.614 END TEST raid_rebuild_test_sb_io 00:10:19.614 19:51:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:10:19.614 ************************************ 00:10:19.614 19:51:10 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:10:19.614 19:51:10 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false true 00:10:19.614 19:51:10 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:10:19.614 
19:51:10 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:19.614 19:51:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:19.614 ************************************ 00:10:19.614 START TEST raid_rebuild_test 00:10:19.614 ************************************ 00:10:19.614 19:51:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 false false true 00:10:19.614 19:51:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:10:19.614 19:51:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:10:19.614 19:51:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:10:19.614 19:51:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:10:19.614 19:51:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:10:19.614 19:51:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:10:19.614 19:51:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:10:19.614 19:51:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:10:19.614 19:51:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:10:19.872 19:51:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:10:19.872 19:51:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:10:19.872 19:51:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:10:19.872 19:51:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:10:19.872 19:51:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:10:19.872 19:51:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:10:19.872 19:51:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= 
num_base_bdevs )) 00:10:19.872 19:51:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:10:19.872 19:51:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:10:19.872 19:51:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:10:19.872 19:51:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:19.872 19:51:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:10:19.872 19:51:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:10:19.872 19:51:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:10:19.872 19:51:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:10:19.872 19:51:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:10:19.872 19:51:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:10:19.872 19:51:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:10:19.872 19:51:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:10:19.872 19:51:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:10:19.872 19:51:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=75415 00:10:19.872 19:51:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 75415 00:10:19.872 19:51:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 75415 ']' 00:10:19.872 19:51:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:19.872 19:51:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:19.872 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:10:19.872 19:51:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:19.872 19:51:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:19.872 19:51:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.872 19:51:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:10:19.872 I/O size of 3145728 is greater than zero copy threshold (65536). 00:10:19.872 Zero copy mechanism will not be used. 00:10:19.872 [2024-11-26 19:51:10.613398] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 00:10:19.872 [2024-11-26 19:51:10.613510] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75415 ] 00:10:19.872 [2024-11-26 19:51:10.765687] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:20.131 [2024-11-26 19:51:10.867946] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:20.131 [2024-11-26 19:51:10.987657] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:20.131 [2024-11-26 19:51:10.987711] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:20.696 19:51:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:20.696 19:51:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:10:20.696 19:51:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:10:20.696 19:51:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 
512 -b BaseBdev1_malloc 00:10:20.696 19:51:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.696 19:51:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.696 BaseBdev1_malloc 00:10:20.696 19:51:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.696 19:51:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:10:20.696 19:51:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.696 19:51:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.696 [2024-11-26 19:51:11.494225] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:10:20.696 [2024-11-26 19:51:11.494286] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:20.696 [2024-11-26 19:51:11.494305] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:20.696 [2024-11-26 19:51:11.494316] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:20.696 [2024-11-26 19:51:11.496213] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:20.696 [2024-11-26 19:51:11.496244] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:20.696 BaseBdev1 00:10:20.696 19:51:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.696 19:51:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:10:20.696 19:51:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:20.696 19:51:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.696 19:51:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 
00:10:20.696 BaseBdev2_malloc 00:10:20.696 19:51:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.696 19:51:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:10:20.696 19:51:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.696 19:51:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.696 [2024-11-26 19:51:11.527837] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:10:20.696 [2024-11-26 19:51:11.527889] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:20.696 [2024-11-26 19:51:11.527909] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:10:20.696 [2024-11-26 19:51:11.527919] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:20.696 [2024-11-26 19:51:11.529772] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:20.696 [2024-11-26 19:51:11.529801] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:20.696 BaseBdev2 00:10:20.696 19:51:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.696 19:51:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:10:20.696 19:51:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:20.696 19:51:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.696 19:51:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.696 BaseBdev3_malloc 00:10:20.696 19:51:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.696 19:51:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd 
bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:10:20.696 19:51:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.696 19:51:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.696 [2024-11-26 19:51:11.578509] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:10:20.696 [2024-11-26 19:51:11.578567] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:20.696 [2024-11-26 19:51:11.578589] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:10:20.696 [2024-11-26 19:51:11.578600] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:20.696 [2024-11-26 19:51:11.580489] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:20.696 [2024-11-26 19:51:11.580520] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:20.696 BaseBdev3 00:10:20.696 19:51:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.697 19:51:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:10:20.697 19:51:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:10:20.697 19:51:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.697 19:51:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.697 BaseBdev4_malloc 00:10:20.697 19:51:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.697 19:51:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:10:20.697 19:51:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.697 19:51:11 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:20.697 [2024-11-26 19:51:11.612174] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:10:20.697 [2024-11-26 19:51:11.612222] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:20.697 [2024-11-26 19:51:11.612239] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:10:20.697 [2024-11-26 19:51:11.612249] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:20.697 [2024-11-26 19:51:11.614065] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:20.697 [2024-11-26 19:51:11.614096] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:10:20.697 BaseBdev4 00:10:20.697 19:51:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.697 19:51:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:10:20.697 19:51:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.697 19:51:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.955 spare_malloc 00:10:20.955 19:51:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.955 19:51:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:10:20.955 19:51:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.955 19:51:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.955 spare_delay 00:10:20.955 19:51:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.955 19:51:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:10:20.955 
19:51:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.955 19:51:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.955 [2024-11-26 19:51:11.653575] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:10:20.955 [2024-11-26 19:51:11.653619] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:20.955 [2024-11-26 19:51:11.653636] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:10:20.955 [2024-11-26 19:51:11.653646] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:20.955 [2024-11-26 19:51:11.655512] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:20.955 [2024-11-26 19:51:11.655538] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:10:20.955 spare 00:10:20.955 19:51:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.955 19:51:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:10:20.955 19:51:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.955 19:51:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.955 [2024-11-26 19:51:11.661618] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:20.955 [2024-11-26 19:51:11.663219] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:20.955 [2024-11-26 19:51:11.663272] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:20.955 [2024-11-26 19:51:11.663315] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:20.955 [2024-11-26 19:51:11.663392] bdev_raid.c:1734:raid_bdev_configure_cont: 
*DEBUG*: io device register 0x617000007780 00:10:20.955 [2024-11-26 19:51:11.663405] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:10:20.955 [2024-11-26 19:51:11.663621] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:10:20.955 [2024-11-26 19:51:11.663756] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:10:20.955 [2024-11-26 19:51:11.663771] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:10:20.955 [2024-11-26 19:51:11.663886] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:20.955 19:51:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.955 19:51:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:10:20.955 19:51:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:20.955 19:51:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:20.955 19:51:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:20.955 19:51:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:20.955 19:51:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:20.955 19:51:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:20.955 19:51:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:20.955 19:51:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:20.955 19:51:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:20.955 19:51:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:10:20.955 19:51:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:20.955 19:51:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.955 19:51:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.955 19:51:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.955 19:51:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:20.955 "name": "raid_bdev1", 00:10:20.955 "uuid": "aabb0a16-26f9-46ae-9f4a-047723ca0087", 00:10:20.955 "strip_size_kb": 0, 00:10:20.955 "state": "online", 00:10:20.955 "raid_level": "raid1", 00:10:20.955 "superblock": false, 00:10:20.955 "num_base_bdevs": 4, 00:10:20.955 "num_base_bdevs_discovered": 4, 00:10:20.955 "num_base_bdevs_operational": 4, 00:10:20.955 "base_bdevs_list": [ 00:10:20.955 { 00:10:20.955 "name": "BaseBdev1", 00:10:20.955 "uuid": "4e481978-71fe-5548-9e58-07fb4f128cc8", 00:10:20.955 "is_configured": true, 00:10:20.955 "data_offset": 0, 00:10:20.955 "data_size": 65536 00:10:20.955 }, 00:10:20.955 { 00:10:20.955 "name": "BaseBdev2", 00:10:20.955 "uuid": "24e63e3d-3404-5b0a-be48-888a6fc5999b", 00:10:20.955 "is_configured": true, 00:10:20.955 "data_offset": 0, 00:10:20.955 "data_size": 65536 00:10:20.955 }, 00:10:20.955 { 00:10:20.955 "name": "BaseBdev3", 00:10:20.955 "uuid": "dfe9f412-be52-574f-80fa-fe66e817d5fc", 00:10:20.955 "is_configured": true, 00:10:20.955 "data_offset": 0, 00:10:20.955 "data_size": 65536 00:10:20.955 }, 00:10:20.955 { 00:10:20.955 "name": "BaseBdev4", 00:10:20.955 "uuid": "8a623b96-6675-52bf-9356-b580547d0698", 00:10:20.955 "is_configured": true, 00:10:20.955 "data_offset": 0, 00:10:20.955 "data_size": 65536 00:10:20.955 } 00:10:20.955 ] 00:10:20.955 }' 00:10:20.955 19:51:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:20.955 19:51:11 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:21.214 19:51:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:21.214 19:51:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:10:21.214 19:51:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.214 19:51:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.214 [2024-11-26 19:51:11.978006] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:21.214 19:51:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.214 19:51:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:10:21.214 19:51:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:21.214 19:51:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.214 19:51:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.214 19:51:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:10:21.214 19:51:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.214 19:51:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:10:21.214 19:51:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:10:21.214 19:51:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:10:21.214 19:51:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:10:21.214 19:51:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:10:21.214 19:51:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:10:21.214 19:51:12 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:10:21.214 19:51:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:10:21.214 19:51:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:10:21.214 19:51:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:10:21.214 19:51:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:10:21.214 19:51:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:10:21.214 19:51:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:10:21.214 19:51:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:10:21.471 [2024-11-26 19:51:12.225775] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:10:21.471 /dev/nbd0 00:10:21.471 19:51:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:10:21.471 19:51:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:10:21.471 19:51:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:10:21.471 19:51:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:10:21.471 19:51:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:21.471 19:51:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:21.471 19:51:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:10:21.471 19:51:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:10:21.471 19:51:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:21.471 19:51:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 
00:10:21.472 19:51:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:21.472 1+0 records in 00:10:21.472 1+0 records out 00:10:21.472 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000197313 s, 20.8 MB/s 00:10:21.472 19:51:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:21.472 19:51:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:10:21.472 19:51:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:21.472 19:51:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:21.472 19:51:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:10:21.472 19:51:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:21.472 19:51:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:10:21.472 19:51:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:10:21.472 19:51:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:10:21.472 19:51:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:10:28.040 65536+0 records in 00:10:28.040 65536+0 records out 00:10:28.040 33554432 bytes (34 MB, 32 MiB) copied, 5.67443 s, 5.9 MB/s 00:10:28.040 19:51:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:10:28.040 19:51:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:10:28.040 19:51:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:10:28.040 19:51:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local 
nbd_list 00:10:28.040 19:51:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:10:28.040 19:51:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:28.040 19:51:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:10:28.040 [2024-11-26 19:51:18.167789] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:28.040 19:51:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:28.040 19:51:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:28.040 19:51:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:28.040 19:51:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:28.040 19:51:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:28.040 19:51:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:28.040 19:51:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:10:28.040 19:51:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:10:28.040 19:51:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:10:28.041 19:51:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.041 19:51:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.041 [2024-11-26 19:51:18.202365] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:28.041 19:51:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.041 19:51:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:28.041 19:51:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=raid_bdev1 00:10:28.041 19:51:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:28.041 19:51:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:28.041 19:51:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:28.041 19:51:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:28.041 19:51:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:28.041 19:51:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:28.041 19:51:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:28.041 19:51:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:28.041 19:51:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:28.041 19:51:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.041 19:51:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.041 19:51:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:28.041 19:51:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.041 19:51:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:28.041 "name": "raid_bdev1", 00:10:28.041 "uuid": "aabb0a16-26f9-46ae-9f4a-047723ca0087", 00:10:28.041 "strip_size_kb": 0, 00:10:28.041 "state": "online", 00:10:28.041 "raid_level": "raid1", 00:10:28.041 "superblock": false, 00:10:28.041 "num_base_bdevs": 4, 00:10:28.041 "num_base_bdevs_discovered": 3, 00:10:28.041 "num_base_bdevs_operational": 3, 00:10:28.041 "base_bdevs_list": [ 00:10:28.041 { 00:10:28.041 "name": null, 00:10:28.041 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:28.041 
"is_configured": false, 00:10:28.041 "data_offset": 0, 00:10:28.041 "data_size": 65536 00:10:28.041 }, 00:10:28.041 { 00:10:28.041 "name": "BaseBdev2", 00:10:28.041 "uuid": "24e63e3d-3404-5b0a-be48-888a6fc5999b", 00:10:28.041 "is_configured": true, 00:10:28.041 "data_offset": 0, 00:10:28.041 "data_size": 65536 00:10:28.041 }, 00:10:28.041 { 00:10:28.041 "name": "BaseBdev3", 00:10:28.041 "uuid": "dfe9f412-be52-574f-80fa-fe66e817d5fc", 00:10:28.041 "is_configured": true, 00:10:28.041 "data_offset": 0, 00:10:28.041 "data_size": 65536 00:10:28.041 }, 00:10:28.041 { 00:10:28.041 "name": "BaseBdev4", 00:10:28.041 "uuid": "8a623b96-6675-52bf-9356-b580547d0698", 00:10:28.041 "is_configured": true, 00:10:28.041 "data_offset": 0, 00:10:28.041 "data_size": 65536 00:10:28.041 } 00:10:28.041 ] 00:10:28.041 }' 00:10:28.041 19:51:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:28.041 19:51:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.041 19:51:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:10:28.041 19:51:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.041 19:51:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.041 [2024-11-26 19:51:18.502409] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:10:28.041 [2024-11-26 19:51:18.510856] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09d70 00:10:28.041 19:51:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.041 19:51:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:10:28.041 [2024-11-26 19:51:18.512553] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:10:28.605 19:51:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # 
verify_raid_bdev_process raid_bdev1 rebuild spare 00:10:28.605 19:51:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:28.605 19:51:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:10:28.605 19:51:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:10:28.605 19:51:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:28.605 19:51:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:28.605 19:51:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:28.605 19:51:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.605 19:51:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.605 19:51:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.863 19:51:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:28.863 "name": "raid_bdev1", 00:10:28.863 "uuid": "aabb0a16-26f9-46ae-9f4a-047723ca0087", 00:10:28.863 "strip_size_kb": 0, 00:10:28.863 "state": "online", 00:10:28.863 "raid_level": "raid1", 00:10:28.863 "superblock": false, 00:10:28.863 "num_base_bdevs": 4, 00:10:28.863 "num_base_bdevs_discovered": 4, 00:10:28.863 "num_base_bdevs_operational": 4, 00:10:28.863 "process": { 00:10:28.863 "type": "rebuild", 00:10:28.863 "target": "spare", 00:10:28.863 "progress": { 00:10:28.863 "blocks": 20480, 00:10:28.863 "percent": 31 00:10:28.863 } 00:10:28.863 }, 00:10:28.863 "base_bdevs_list": [ 00:10:28.863 { 00:10:28.863 "name": "spare", 00:10:28.863 "uuid": "9090da1f-9983-591e-b053-fb692126c359", 00:10:28.863 "is_configured": true, 00:10:28.863 "data_offset": 0, 00:10:28.863 "data_size": 65536 00:10:28.863 }, 00:10:28.863 { 00:10:28.863 "name": "BaseBdev2", 00:10:28.863 "uuid": 
"24e63e3d-3404-5b0a-be48-888a6fc5999b", 00:10:28.863 "is_configured": true, 00:10:28.863 "data_offset": 0, 00:10:28.863 "data_size": 65536 00:10:28.863 }, 00:10:28.863 { 00:10:28.863 "name": "BaseBdev3", 00:10:28.863 "uuid": "dfe9f412-be52-574f-80fa-fe66e817d5fc", 00:10:28.863 "is_configured": true, 00:10:28.863 "data_offset": 0, 00:10:28.863 "data_size": 65536 00:10:28.863 }, 00:10:28.863 { 00:10:28.863 "name": "BaseBdev4", 00:10:28.863 "uuid": "8a623b96-6675-52bf-9356-b580547d0698", 00:10:28.863 "is_configured": true, 00:10:28.863 "data_offset": 0, 00:10:28.863 "data_size": 65536 00:10:28.863 } 00:10:28.863 ] 00:10:28.863 }' 00:10:28.863 19:51:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:28.863 19:51:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:10:28.863 19:51:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:28.863 19:51:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:10:28.863 19:51:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:10:28.863 19:51:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.863 19:51:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.863 [2024-11-26 19:51:19.610434] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:10:28.863 [2024-11-26 19:51:19.618916] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:10:28.863 [2024-11-26 19:51:19.618985] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:28.863 [2024-11-26 19:51:19.619001] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:10:28.863 [2024-11-26 19:51:19.619009] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove 
target bdev: No such device 00:10:28.863 19:51:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.863 19:51:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:28.863 19:51:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:28.863 19:51:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:28.863 19:51:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:28.863 19:51:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:28.863 19:51:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:28.863 19:51:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:28.863 19:51:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:28.863 19:51:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:28.863 19:51:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:28.863 19:51:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:28.863 19:51:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:28.863 19:51:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.863 19:51:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.863 19:51:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.863 19:51:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:28.863 "name": "raid_bdev1", 00:10:28.863 "uuid": "aabb0a16-26f9-46ae-9f4a-047723ca0087", 00:10:28.863 "strip_size_kb": 0, 00:10:28.863 "state": "online", 
00:10:28.863 "raid_level": "raid1", 00:10:28.863 "superblock": false, 00:10:28.863 "num_base_bdevs": 4, 00:10:28.863 "num_base_bdevs_discovered": 3, 00:10:28.863 "num_base_bdevs_operational": 3, 00:10:28.863 "base_bdevs_list": [ 00:10:28.863 { 00:10:28.863 "name": null, 00:10:28.863 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:28.863 "is_configured": false, 00:10:28.863 "data_offset": 0, 00:10:28.863 "data_size": 65536 00:10:28.863 }, 00:10:28.863 { 00:10:28.863 "name": "BaseBdev2", 00:10:28.863 "uuid": "24e63e3d-3404-5b0a-be48-888a6fc5999b", 00:10:28.863 "is_configured": true, 00:10:28.863 "data_offset": 0, 00:10:28.863 "data_size": 65536 00:10:28.863 }, 00:10:28.863 { 00:10:28.863 "name": "BaseBdev3", 00:10:28.863 "uuid": "dfe9f412-be52-574f-80fa-fe66e817d5fc", 00:10:28.863 "is_configured": true, 00:10:28.863 "data_offset": 0, 00:10:28.863 "data_size": 65536 00:10:28.863 }, 00:10:28.863 { 00:10:28.863 "name": "BaseBdev4", 00:10:28.863 "uuid": "8a623b96-6675-52bf-9356-b580547d0698", 00:10:28.863 "is_configured": true, 00:10:28.863 "data_offset": 0, 00:10:28.863 "data_size": 65536 00:10:28.863 } 00:10:28.863 ] 00:10:28.863 }' 00:10:28.863 19:51:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:28.863 19:51:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.121 19:51:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:10:29.121 19:51:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:29.121 19:51:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:10:29.121 19:51:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:10:29.121 19:51:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:29.121 19:51:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:10:29.121 19:51:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:29.121 19:51:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.121 19:51:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.121 19:51:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.121 19:51:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:29.121 "name": "raid_bdev1", 00:10:29.121 "uuid": "aabb0a16-26f9-46ae-9f4a-047723ca0087", 00:10:29.121 "strip_size_kb": 0, 00:10:29.121 "state": "online", 00:10:29.121 "raid_level": "raid1", 00:10:29.121 "superblock": false, 00:10:29.121 "num_base_bdevs": 4, 00:10:29.121 "num_base_bdevs_discovered": 3, 00:10:29.121 "num_base_bdevs_operational": 3, 00:10:29.121 "base_bdevs_list": [ 00:10:29.121 { 00:10:29.121 "name": null, 00:10:29.121 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:29.121 "is_configured": false, 00:10:29.121 "data_offset": 0, 00:10:29.121 "data_size": 65536 00:10:29.121 }, 00:10:29.121 { 00:10:29.121 "name": "BaseBdev2", 00:10:29.121 "uuid": "24e63e3d-3404-5b0a-be48-888a6fc5999b", 00:10:29.121 "is_configured": true, 00:10:29.121 "data_offset": 0, 00:10:29.121 "data_size": 65536 00:10:29.121 }, 00:10:29.121 { 00:10:29.121 "name": "BaseBdev3", 00:10:29.121 "uuid": "dfe9f412-be52-574f-80fa-fe66e817d5fc", 00:10:29.121 "is_configured": true, 00:10:29.121 "data_offset": 0, 00:10:29.121 "data_size": 65536 00:10:29.121 }, 00:10:29.121 { 00:10:29.121 "name": "BaseBdev4", 00:10:29.121 "uuid": "8a623b96-6675-52bf-9356-b580547d0698", 00:10:29.121 "is_configured": true, 00:10:29.121 "data_offset": 0, 00:10:29.121 "data_size": 65536 00:10:29.121 } 00:10:29.121 ] 00:10:29.121 }' 00:10:29.121 19:51:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:29.121 19:51:20 bdev_raid.raid_rebuild_test 
-- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:10:29.121 19:51:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:29.121 19:51:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:10:29.121 19:51:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:10:29.121 19:51:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.121 19:51:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.121 [2024-11-26 19:51:20.047727] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:10:29.121 [2024-11-26 19:51:20.055367] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09e40 00:10:29.121 19:51:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.121 19:51:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:10:29.378 [2024-11-26 19:51:20.057091] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:10:30.310 19:51:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:10:30.310 19:51:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:30.310 19:51:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:10:30.310 19:51:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:10:30.310 19:51:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:30.310 19:51:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:30.310 19:51:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.310 19:51:21 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:30.310 19:51:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.310 19:51:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.310 19:51:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:30.310 "name": "raid_bdev1", 00:10:30.310 "uuid": "aabb0a16-26f9-46ae-9f4a-047723ca0087", 00:10:30.310 "strip_size_kb": 0, 00:10:30.310 "state": "online", 00:10:30.310 "raid_level": "raid1", 00:10:30.310 "superblock": false, 00:10:30.311 "num_base_bdevs": 4, 00:10:30.311 "num_base_bdevs_discovered": 4, 00:10:30.311 "num_base_bdevs_operational": 4, 00:10:30.311 "process": { 00:10:30.311 "type": "rebuild", 00:10:30.311 "target": "spare", 00:10:30.311 "progress": { 00:10:30.311 "blocks": 20480, 00:10:30.311 "percent": 31 00:10:30.311 } 00:10:30.311 }, 00:10:30.311 "base_bdevs_list": [ 00:10:30.311 { 00:10:30.311 "name": "spare", 00:10:30.311 "uuid": "9090da1f-9983-591e-b053-fb692126c359", 00:10:30.311 "is_configured": true, 00:10:30.311 "data_offset": 0, 00:10:30.311 "data_size": 65536 00:10:30.311 }, 00:10:30.311 { 00:10:30.311 "name": "BaseBdev2", 00:10:30.311 "uuid": "24e63e3d-3404-5b0a-be48-888a6fc5999b", 00:10:30.311 "is_configured": true, 00:10:30.311 "data_offset": 0, 00:10:30.311 "data_size": 65536 00:10:30.311 }, 00:10:30.311 { 00:10:30.311 "name": "BaseBdev3", 00:10:30.311 "uuid": "dfe9f412-be52-574f-80fa-fe66e817d5fc", 00:10:30.311 "is_configured": true, 00:10:30.311 "data_offset": 0, 00:10:30.311 "data_size": 65536 00:10:30.311 }, 00:10:30.311 { 00:10:30.311 "name": "BaseBdev4", 00:10:30.311 "uuid": "8a623b96-6675-52bf-9356-b580547d0698", 00:10:30.311 "is_configured": true, 00:10:30.311 "data_offset": 0, 00:10:30.311 "data_size": 65536 00:10:30.311 } 00:10:30.311 ] 00:10:30.311 }' 00:10:30.311 19:51:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 
00:10:30.311 19:51:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:10:30.311 19:51:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:30.311 19:51:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:10:30.311 19:51:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:10:30.311 19:51:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:10:30.311 19:51:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:10:30.311 19:51:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:10:30.311 19:51:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:30.311 19:51:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.311 19:51:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.311 [2024-11-26 19:51:21.154959] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:30.311 [2024-11-26 19:51:21.163415] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000d09e40 00:10:30.311 19:51:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.311 19:51:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:10:30.311 19:51:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:10:30.311 19:51:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:10:30.311 19:51:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:30.311 19:51:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:10:30.311 19:51:21 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:10:30.311 19:51:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:30.311 19:51:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:30.311 19:51:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:30.311 19:51:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.311 19:51:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.311 19:51:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.311 19:51:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:30.311 "name": "raid_bdev1", 00:10:30.311 "uuid": "aabb0a16-26f9-46ae-9f4a-047723ca0087", 00:10:30.311 "strip_size_kb": 0, 00:10:30.311 "state": "online", 00:10:30.311 "raid_level": "raid1", 00:10:30.311 "superblock": false, 00:10:30.311 "num_base_bdevs": 4, 00:10:30.311 "num_base_bdevs_discovered": 3, 00:10:30.311 "num_base_bdevs_operational": 3, 00:10:30.311 "process": { 00:10:30.311 "type": "rebuild", 00:10:30.311 "target": "spare", 00:10:30.311 "progress": { 00:10:30.311 "blocks": 22528, 00:10:30.311 "percent": 34 00:10:30.311 } 00:10:30.311 }, 00:10:30.311 "base_bdevs_list": [ 00:10:30.311 { 00:10:30.311 "name": "spare", 00:10:30.311 "uuid": "9090da1f-9983-591e-b053-fb692126c359", 00:10:30.311 "is_configured": true, 00:10:30.311 "data_offset": 0, 00:10:30.311 "data_size": 65536 00:10:30.311 }, 00:10:30.311 { 00:10:30.311 "name": null, 00:10:30.311 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:30.311 "is_configured": false, 00:10:30.311 "data_offset": 0, 00:10:30.311 "data_size": 65536 00:10:30.311 }, 00:10:30.311 { 00:10:30.311 "name": "BaseBdev3", 00:10:30.311 "uuid": "dfe9f412-be52-574f-80fa-fe66e817d5fc", 00:10:30.311 "is_configured": true, 
00:10:30.311 "data_offset": 0, 00:10:30.311 "data_size": 65536 00:10:30.311 }, 00:10:30.311 { 00:10:30.311 "name": "BaseBdev4", 00:10:30.312 "uuid": "8a623b96-6675-52bf-9356-b580547d0698", 00:10:30.312 "is_configured": true, 00:10:30.312 "data_offset": 0, 00:10:30.312 "data_size": 65536 00:10:30.312 } 00:10:30.312 ] 00:10:30.312 }' 00:10:30.312 19:51:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:30.312 19:51:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:10:30.312 19:51:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:30.569 19:51:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:10:30.569 19:51:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=351 00:10:30.569 19:51:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:10:30.569 19:51:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:10:30.569 19:51:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:30.569 19:51:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:10:30.569 19:51:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:10:30.569 19:51:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:30.569 19:51:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:30.569 19:51:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:30.569 19:51:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.569 19:51:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.569 19:51:21 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.569 19:51:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:30.569 "name": "raid_bdev1", 00:10:30.569 "uuid": "aabb0a16-26f9-46ae-9f4a-047723ca0087", 00:10:30.569 "strip_size_kb": 0, 00:10:30.569 "state": "online", 00:10:30.569 "raid_level": "raid1", 00:10:30.569 "superblock": false, 00:10:30.569 "num_base_bdevs": 4, 00:10:30.569 "num_base_bdevs_discovered": 3, 00:10:30.569 "num_base_bdevs_operational": 3, 00:10:30.569 "process": { 00:10:30.569 "type": "rebuild", 00:10:30.569 "target": "spare", 00:10:30.569 "progress": { 00:10:30.569 "blocks": 24576, 00:10:30.569 "percent": 37 00:10:30.569 } 00:10:30.569 }, 00:10:30.569 "base_bdevs_list": [ 00:10:30.569 { 00:10:30.569 "name": "spare", 00:10:30.569 "uuid": "9090da1f-9983-591e-b053-fb692126c359", 00:10:30.569 "is_configured": true, 00:10:30.569 "data_offset": 0, 00:10:30.569 "data_size": 65536 00:10:30.569 }, 00:10:30.569 { 00:10:30.569 "name": null, 00:10:30.569 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:30.569 "is_configured": false, 00:10:30.569 "data_offset": 0, 00:10:30.569 "data_size": 65536 00:10:30.569 }, 00:10:30.569 { 00:10:30.569 "name": "BaseBdev3", 00:10:30.569 "uuid": "dfe9f412-be52-574f-80fa-fe66e817d5fc", 00:10:30.569 "is_configured": true, 00:10:30.569 "data_offset": 0, 00:10:30.569 "data_size": 65536 00:10:30.569 }, 00:10:30.569 { 00:10:30.569 "name": "BaseBdev4", 00:10:30.569 "uuid": "8a623b96-6675-52bf-9356-b580547d0698", 00:10:30.569 "is_configured": true, 00:10:30.569 "data_offset": 0, 00:10:30.569 "data_size": 65536 00:10:30.569 } 00:10:30.569 ] 00:10:30.569 }' 00:10:30.569 19:51:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:30.569 19:51:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:10:30.569 19:51:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq 
-r '.process.target // "none"' 00:10:30.569 19:51:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:10:30.569 19:51:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:10:31.501 19:51:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:10:31.501 19:51:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:10:31.501 19:51:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:31.501 19:51:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:10:31.501 19:51:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:10:31.501 19:51:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:31.501 19:51:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:31.501 19:51:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:31.501 19:51:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.501 19:51:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.501 19:51:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.501 19:51:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:31.501 "name": "raid_bdev1", 00:10:31.501 "uuid": "aabb0a16-26f9-46ae-9f4a-047723ca0087", 00:10:31.501 "strip_size_kb": 0, 00:10:31.501 "state": "online", 00:10:31.501 "raid_level": "raid1", 00:10:31.501 "superblock": false, 00:10:31.501 "num_base_bdevs": 4, 00:10:31.501 "num_base_bdevs_discovered": 3, 00:10:31.501 "num_base_bdevs_operational": 3, 00:10:31.501 "process": { 00:10:31.501 "type": "rebuild", 00:10:31.501 "target": "spare", 00:10:31.501 "progress": { 00:10:31.501 
"blocks": 45056, 00:10:31.501 "percent": 68 00:10:31.501 } 00:10:31.501 }, 00:10:31.501 "base_bdevs_list": [ 00:10:31.501 { 00:10:31.501 "name": "spare", 00:10:31.501 "uuid": "9090da1f-9983-591e-b053-fb692126c359", 00:10:31.501 "is_configured": true, 00:10:31.501 "data_offset": 0, 00:10:31.501 "data_size": 65536 00:10:31.501 }, 00:10:31.501 { 00:10:31.501 "name": null, 00:10:31.501 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:31.501 "is_configured": false, 00:10:31.501 "data_offset": 0, 00:10:31.501 "data_size": 65536 00:10:31.501 }, 00:10:31.501 { 00:10:31.501 "name": "BaseBdev3", 00:10:31.501 "uuid": "dfe9f412-be52-574f-80fa-fe66e817d5fc", 00:10:31.501 "is_configured": true, 00:10:31.501 "data_offset": 0, 00:10:31.501 "data_size": 65536 00:10:31.501 }, 00:10:31.501 { 00:10:31.501 "name": "BaseBdev4", 00:10:31.501 "uuid": "8a623b96-6675-52bf-9356-b580547d0698", 00:10:31.501 "is_configured": true, 00:10:31.501 "data_offset": 0, 00:10:31.501 "data_size": 65536 00:10:31.501 } 00:10:31.501 ] 00:10:31.501 }' 00:10:31.501 19:51:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:31.501 19:51:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:10:31.501 19:51:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:31.758 19:51:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:10:31.758 19:51:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:10:32.384 [2024-11-26 19:51:23.274908] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:10:32.384 [2024-11-26 19:51:23.275003] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:10:32.384 [2024-11-26 19:51:23.275050] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:32.643 19:51:23 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:10:32.643 19:51:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:10:32.643 19:51:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:32.643 19:51:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:10:32.643 19:51:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:10:32.643 19:51:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:32.643 19:51:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:32.643 19:51:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:32.643 19:51:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.643 19:51:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.643 19:51:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.643 19:51:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:32.643 "name": "raid_bdev1", 00:10:32.643 "uuid": "aabb0a16-26f9-46ae-9f4a-047723ca0087", 00:10:32.643 "strip_size_kb": 0, 00:10:32.643 "state": "online", 00:10:32.643 "raid_level": "raid1", 00:10:32.643 "superblock": false, 00:10:32.643 "num_base_bdevs": 4, 00:10:32.643 "num_base_bdevs_discovered": 3, 00:10:32.643 "num_base_bdevs_operational": 3, 00:10:32.643 "base_bdevs_list": [ 00:10:32.643 { 00:10:32.643 "name": "spare", 00:10:32.643 "uuid": "9090da1f-9983-591e-b053-fb692126c359", 00:10:32.643 "is_configured": true, 00:10:32.643 "data_offset": 0, 00:10:32.643 "data_size": 65536 00:10:32.643 }, 00:10:32.643 { 00:10:32.643 "name": null, 00:10:32.643 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:32.643 "is_configured": false, 00:10:32.643 
"data_offset": 0, 00:10:32.643 "data_size": 65536 00:10:32.643 }, 00:10:32.643 { 00:10:32.643 "name": "BaseBdev3", 00:10:32.643 "uuid": "dfe9f412-be52-574f-80fa-fe66e817d5fc", 00:10:32.643 "is_configured": true, 00:10:32.643 "data_offset": 0, 00:10:32.643 "data_size": 65536 00:10:32.643 }, 00:10:32.643 { 00:10:32.643 "name": "BaseBdev4", 00:10:32.643 "uuid": "8a623b96-6675-52bf-9356-b580547d0698", 00:10:32.643 "is_configured": true, 00:10:32.643 "data_offset": 0, 00:10:32.643 "data_size": 65536 00:10:32.643 } 00:10:32.643 ] 00:10:32.643 }' 00:10:32.643 19:51:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:32.643 19:51:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:10:32.643 19:51:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:32.643 19:51:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:10:32.643 19:51:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:10:32.643 19:51:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:10:32.643 19:51:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:32.643 19:51:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:10:32.643 19:51:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:10:32.643 19:51:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:32.643 19:51:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:32.643 19:51:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:32.643 19:51:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.643 19:51:23 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.643 19:51:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.902 19:51:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:32.902 "name": "raid_bdev1", 00:10:32.902 "uuid": "aabb0a16-26f9-46ae-9f4a-047723ca0087", 00:10:32.902 "strip_size_kb": 0, 00:10:32.902 "state": "online", 00:10:32.902 "raid_level": "raid1", 00:10:32.902 "superblock": false, 00:10:32.902 "num_base_bdevs": 4, 00:10:32.902 "num_base_bdevs_discovered": 3, 00:10:32.902 "num_base_bdevs_operational": 3, 00:10:32.902 "base_bdevs_list": [ 00:10:32.902 { 00:10:32.902 "name": "spare", 00:10:32.902 "uuid": "9090da1f-9983-591e-b053-fb692126c359", 00:10:32.902 "is_configured": true, 00:10:32.902 "data_offset": 0, 00:10:32.902 "data_size": 65536 00:10:32.902 }, 00:10:32.902 { 00:10:32.902 "name": null, 00:10:32.902 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:32.902 "is_configured": false, 00:10:32.902 "data_offset": 0, 00:10:32.902 "data_size": 65536 00:10:32.902 }, 00:10:32.902 { 00:10:32.902 "name": "BaseBdev3", 00:10:32.902 "uuid": "dfe9f412-be52-574f-80fa-fe66e817d5fc", 00:10:32.902 "is_configured": true, 00:10:32.902 "data_offset": 0, 00:10:32.902 "data_size": 65536 00:10:32.902 }, 00:10:32.902 { 00:10:32.902 "name": "BaseBdev4", 00:10:32.902 "uuid": "8a623b96-6675-52bf-9356-b580547d0698", 00:10:32.902 "is_configured": true, 00:10:32.902 "data_offset": 0, 00:10:32.902 "data_size": 65536 00:10:32.902 } 00:10:32.902 ] 00:10:32.902 }' 00:10:32.902 19:51:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:32.902 19:51:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:10:32.902 19:51:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:32.902 19:51:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none 
== \n\o\n\e ]] 00:10:32.902 19:51:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:32.902 19:51:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:32.902 19:51:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:32.902 19:51:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:32.902 19:51:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:32.902 19:51:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:32.902 19:51:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:32.902 19:51:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:32.902 19:51:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:32.902 19:51:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:32.902 19:51:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:32.902 19:51:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.902 19:51:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:32.902 19:51:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.902 19:51:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.902 19:51:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:32.902 "name": "raid_bdev1", 00:10:32.902 "uuid": "aabb0a16-26f9-46ae-9f4a-047723ca0087", 00:10:32.902 "strip_size_kb": 0, 00:10:32.902 "state": "online", 00:10:32.902 "raid_level": "raid1", 00:10:32.902 "superblock": false, 00:10:32.902 "num_base_bdevs": 4, 00:10:32.902 
"num_base_bdevs_discovered": 3, 00:10:32.902 "num_base_bdevs_operational": 3, 00:10:32.902 "base_bdevs_list": [ 00:10:32.902 { 00:10:32.902 "name": "spare", 00:10:32.902 "uuid": "9090da1f-9983-591e-b053-fb692126c359", 00:10:32.902 "is_configured": true, 00:10:32.902 "data_offset": 0, 00:10:32.902 "data_size": 65536 00:10:32.902 }, 00:10:32.902 { 00:10:32.902 "name": null, 00:10:32.902 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:32.902 "is_configured": false, 00:10:32.902 "data_offset": 0, 00:10:32.902 "data_size": 65536 00:10:32.902 }, 00:10:32.902 { 00:10:32.902 "name": "BaseBdev3", 00:10:32.902 "uuid": "dfe9f412-be52-574f-80fa-fe66e817d5fc", 00:10:32.902 "is_configured": true, 00:10:32.902 "data_offset": 0, 00:10:32.902 "data_size": 65536 00:10:32.902 }, 00:10:32.902 { 00:10:32.902 "name": "BaseBdev4", 00:10:32.902 "uuid": "8a623b96-6675-52bf-9356-b580547d0698", 00:10:32.902 "is_configured": true, 00:10:32.902 "data_offset": 0, 00:10:32.902 "data_size": 65536 00:10:32.902 } 00:10:32.902 ] 00:10:32.902 }' 00:10:32.902 19:51:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:32.902 19:51:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.162 19:51:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:33.162 19:51:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.162 19:51:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.162 [2024-11-26 19:51:23.955605] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:33.162 [2024-11-26 19:51:23.955637] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:33.162 [2024-11-26 19:51:23.955720] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:33.162 [2024-11-26 19:51:23.955799] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: 
raid bdev base bdevs is 0, going to free all in destruct 00:10:33.162 [2024-11-26 19:51:23.955810] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:10:33.162 19:51:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.162 19:51:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:33.162 19:51:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.162 19:51:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:10:33.162 19:51:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.162 19:51:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.162 19:51:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:10:33.162 19:51:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:10:33.162 19:51:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:10:33.162 19:51:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:10:33.162 19:51:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:10:33.162 19:51:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:10:33.162 19:51:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:10:33.162 19:51:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:33.162 19:51:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:10:33.162 19:51:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:10:33.162 19:51:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:10:33.162 19:51:23 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:33.162 19:51:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:10:33.421 /dev/nbd0 00:10:33.421 19:51:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:10:33.421 19:51:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:10:33.421 19:51:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:10:33.421 19:51:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:10:33.421 19:51:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:33.421 19:51:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:33.421 19:51:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:10:33.421 19:51:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:10:33.421 19:51:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:33.421 19:51:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:33.421 19:51:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:33.421 1+0 records in 00:10:33.421 1+0 records out 00:10:33.421 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000347624 s, 11.8 MB/s 00:10:33.421 19:51:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:33.421 19:51:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:10:33.421 19:51:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
00:10:33.421 19:51:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:33.421 19:51:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:10:33.421 19:51:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:33.421 19:51:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:33.421 19:51:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:10:33.679 /dev/nbd1 00:10:33.679 19:51:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:10:33.679 19:51:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:10:33.679 19:51:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:10:33.679 19:51:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:10:33.679 19:51:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:33.679 19:51:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:33.679 19:51:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:10:33.679 19:51:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:10:33.679 19:51:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:33.679 19:51:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:33.679 19:51:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:33.679 1+0 records in 00:10:33.679 1+0 records out 00:10:33.679 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000366953 s, 11.2 MB/s 00:10:33.679 19:51:24 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:33.679 19:51:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:10:33.679 19:51:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:33.679 19:51:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:33.679 19:51:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:10:33.679 19:51:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:33.679 19:51:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:33.679 19:51:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:10:33.937 19:51:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:10:33.937 19:51:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:10:33.937 19:51:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:33.937 19:51:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:33.937 19:51:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:10:33.937 19:51:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:33.937 19:51:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:10:33.937 19:51:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:33.937 19:51:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:33.937 19:51:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:33.937 19:51:24 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:33.937 19:51:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:33.937 19:51:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:33.937 19:51:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:10:33.937 19:51:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:10:33.937 19:51:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:33.937 19:51:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:10:34.196 19:51:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:10:34.196 19:51:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:10:34.196 19:51:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:10:34.196 19:51:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:34.196 19:51:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:34.196 19:51:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:10:34.196 19:51:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:10:34.196 19:51:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:10:34.196 19:51:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:10:34.196 19:51:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 75415 00:10:34.196 19:51:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 75415 ']' 00:10:34.196 19:51:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 75415 00:10:34.196 19:51:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # 
uname 00:10:34.196 19:51:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:34.196 19:51:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75415 00:10:34.196 19:51:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:34.196 19:51:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:34.196 killing process with pid 75415 00:10:34.196 19:51:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75415' 00:10:34.196 19:51:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@973 -- # kill 75415 00:10:34.196 Received shutdown signal, test time was about 60.000000 seconds 00:10:34.196 00:10:34.196 Latency(us) 00:10:34.196 [2024-11-26T19:51:25.133Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:34.196 [2024-11-26T19:51:25.133Z] =================================================================================================================== 00:10:34.196 [2024-11-26T19:51:25.133Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:10:34.196 19:51:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@978 -- # wait 75415 00:10:34.196 [2024-11-26 19:51:25.081291] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:34.761 [2024-11-26 19:51:25.399290] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:35.326 19:51:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:10:35.327 00:10:35.327 real 0m15.621s 00:10:35.327 user 0m16.817s 00:10:35.327 sys 0m2.844s 00:10:35.327 19:51:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:35.327 19:51:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.327 ************************************ 00:10:35.327 END TEST raid_rebuild_test 
00:10:35.327 ************************************ 00:10:35.327 19:51:26 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false true 00:10:35.327 19:51:26 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:10:35.327 19:51:26 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:35.327 19:51:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:35.327 ************************************ 00:10:35.327 START TEST raid_rebuild_test_sb 00:10:35.327 ************************************ 00:10:35.327 19:51:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 true false true 00:10:35.327 19:51:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:10:35.327 19:51:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:10:35.327 19:51:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:10:35.327 19:51:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:10:35.327 19:51:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:10:35.327 19:51:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:10:35.327 19:51:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:10:35.327 19:51:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:10:35.327 19:51:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:10:35.327 19:51:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:10:35.327 19:51:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:10:35.327 19:51:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:10:35.327 19:51:26 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:10:35.327 19:51:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:10:35.327 19:51:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:10:35.327 19:51:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:10:35.327 19:51:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:10:35.327 19:51:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:10:35.327 19:51:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:10:35.327 19:51:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:35.327 19:51:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:10:35.327 19:51:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:10:35.327 19:51:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:10:35.327 19:51:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:10:35.327 19:51:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:10:35.327 19:51:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:10:35.327 19:51:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:10:35.327 19:51:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:10:35.327 19:51:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:10:35.327 19:51:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:10:35.327 19:51:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=75846 00:10:35.327 19:51:26 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@598 -- # waitforlisten 75846 00:10:35.327 19:51:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 75846 ']' 00:10:35.327 19:51:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:10:35.327 19:51:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:35.327 19:51:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:35.327 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:35.327 19:51:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:35.327 19:51:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:35.327 19:51:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.584 [2024-11-26 19:51:26.287467] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 00:10:35.584 I/O size of 3145728 is greater than zero copy threshold (65536). 00:10:35.584 Zero copy mechanism will not be used. 
00:10:35.584 [2024-11-26 19:51:26.287583] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75846 ] 00:10:35.584 [2024-11-26 19:51:26.436607] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:35.841 [2024-11-26 19:51:26.537701] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:35.841 [2024-11-26 19:51:26.656857] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:35.841 [2024-11-26 19:51:26.656898] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:36.407 19:51:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:36.407 19:51:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:10:36.407 19:51:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:10:36.407 19:51:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:36.407 19:51:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.407 19:51:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.407 BaseBdev1_malloc 00:10:36.407 19:51:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.407 19:51:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:10:36.407 19:51:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.407 19:51:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.407 [2024-11-26 19:51:27.162261] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev1_malloc 00:10:36.407 [2024-11-26 19:51:27.162321] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:36.407 [2024-11-26 19:51:27.162351] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:36.407 [2024-11-26 19:51:27.162361] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:36.407 [2024-11-26 19:51:27.164248] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:36.407 [2024-11-26 19:51:27.164282] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:36.407 BaseBdev1 00:10:36.407 19:51:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.407 19:51:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:10:36.407 19:51:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:36.407 19:51:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.407 19:51:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.407 BaseBdev2_malloc 00:10:36.407 19:51:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.407 19:51:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:10:36.407 19:51:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.407 19:51:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.407 [2024-11-26 19:51:27.196018] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:10:36.407 [2024-11-26 19:51:27.196064] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:36.407 [2024-11-26 19:51:27.196081] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:10:36.407 [2024-11-26 19:51:27.196090] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:36.407 [2024-11-26 19:51:27.197908] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:36.407 [2024-11-26 19:51:27.197936] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:36.407 BaseBdev2 00:10:36.407 19:51:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.407 19:51:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:10:36.407 19:51:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:36.407 19:51:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.407 19:51:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.407 BaseBdev3_malloc 00:10:36.407 19:51:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.407 19:51:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:10:36.407 19:51:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.407 19:51:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.407 [2024-11-26 19:51:27.250770] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:10:36.407 [2024-11-26 19:51:27.250813] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:36.407 [2024-11-26 19:51:27.250832] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:10:36.407 [2024-11-26 19:51:27.250842] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:10:36.407 [2024-11-26 19:51:27.252705] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:36.407 [2024-11-26 19:51:27.252735] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:36.407 BaseBdev3 00:10:36.407 19:51:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.407 19:51:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:10:36.407 19:51:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:10:36.407 19:51:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.407 19:51:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.407 BaseBdev4_malloc 00:10:36.407 19:51:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.407 19:51:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:10:36.407 19:51:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.407 19:51:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.407 [2024-11-26 19:51:27.292308] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:10:36.407 [2024-11-26 19:51:27.292361] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:36.407 [2024-11-26 19:51:27.292376] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:10:36.407 [2024-11-26 19:51:27.292385] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:36.407 [2024-11-26 19:51:27.294193] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:36.407 [2024-11-26 19:51:27.294225] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:10:36.407 BaseBdev4 00:10:36.407 19:51:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.407 19:51:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:10:36.407 19:51:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.407 19:51:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.407 spare_malloc 00:10:36.407 19:51:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.407 19:51:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:10:36.407 19:51:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.407 19:51:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.407 spare_delay 00:10:36.407 19:51:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.407 19:51:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:10:36.407 19:51:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.407 19:51:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.407 [2024-11-26 19:51:27.337854] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:10:36.407 [2024-11-26 19:51:27.337896] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:36.407 [2024-11-26 19:51:27.337911] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:10:36.407 [2024-11-26 19:51:27.337921] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:10:36.408 [2024-11-26 19:51:27.339761] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:36.408 [2024-11-26 19:51:27.339792] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:10:36.735 spare 00:10:36.735 19:51:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.735 19:51:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:10:36.735 19:51:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.735 19:51:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.735 [2024-11-26 19:51:27.345906] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:36.735 [2024-11-26 19:51:27.347628] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:36.736 [2024-11-26 19:51:27.347687] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:36.736 [2024-11-26 19:51:27.347731] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:36.736 [2024-11-26 19:51:27.347893] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:10:36.736 [2024-11-26 19:51:27.347908] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:36.736 [2024-11-26 19:51:27.348128] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:10:36.736 [2024-11-26 19:51:27.348269] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:10:36.736 [2024-11-26 19:51:27.348281] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:10:36.736 [2024-11-26 19:51:27.348412] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:36.736 19:51:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.736 19:51:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:10:36.736 19:51:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:36.736 19:51:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:36.736 19:51:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:36.736 19:51:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:36.736 19:51:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:36.736 19:51:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:36.736 19:51:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:36.736 19:51:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:36.736 19:51:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:36.736 19:51:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:36.736 19:51:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:36.736 19:51:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.736 19:51:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.736 19:51:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.736 19:51:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:36.736 "name": "raid_bdev1", 00:10:36.736 "uuid": 
"6de612ee-ce67-4bcf-9c08-9b00092a4550", 00:10:36.736 "strip_size_kb": 0, 00:10:36.736 "state": "online", 00:10:36.736 "raid_level": "raid1", 00:10:36.736 "superblock": true, 00:10:36.736 "num_base_bdevs": 4, 00:10:36.736 "num_base_bdevs_discovered": 4, 00:10:36.736 "num_base_bdevs_operational": 4, 00:10:36.736 "base_bdevs_list": [ 00:10:36.736 { 00:10:36.736 "name": "BaseBdev1", 00:10:36.736 "uuid": "a4ff62c6-b19a-5803-91dc-07f71c7a933e", 00:10:36.736 "is_configured": true, 00:10:36.736 "data_offset": 2048, 00:10:36.736 "data_size": 63488 00:10:36.736 }, 00:10:36.736 { 00:10:36.736 "name": "BaseBdev2", 00:10:36.736 "uuid": "d0a09055-1c7e-51b6-95d0-0b4ddd2db27f", 00:10:36.736 "is_configured": true, 00:10:36.736 "data_offset": 2048, 00:10:36.736 "data_size": 63488 00:10:36.736 }, 00:10:36.736 { 00:10:36.736 "name": "BaseBdev3", 00:10:36.736 "uuid": "e5d88960-3002-5734-bb7e-3f7745a557e7", 00:10:36.736 "is_configured": true, 00:10:36.736 "data_offset": 2048, 00:10:36.736 "data_size": 63488 00:10:36.736 }, 00:10:36.736 { 00:10:36.736 "name": "BaseBdev4", 00:10:36.736 "uuid": "bab8ff54-57f7-5b87-b948-aeef6515df97", 00:10:36.736 "is_configured": true, 00:10:36.736 "data_offset": 2048, 00:10:36.736 "data_size": 63488 00:10:36.736 } 00:10:36.736 ] 00:10:36.736 }' 00:10:36.736 19:51:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:36.736 19:51:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.995 19:51:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:36.995 19:51:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.995 19:51:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.995 19:51:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:10:36.995 [2024-11-26 19:51:27.674287] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:10:36.995 19:51:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.995 19:51:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:10:36.996 19:51:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:10:36.996 19:51:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:36.996 19:51:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.996 19:51:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.996 19:51:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.996 19:51:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:10:36.996 19:51:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:10:36.996 19:51:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:10:36.996 19:51:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:10:36.996 19:51:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:10:36.996 19:51:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:10:36.996 19:51:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:10:36.996 19:51:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:10:36.996 19:51:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:10:36.996 19:51:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:10:36.996 19:51:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:10:36.996 19:51:27 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:10:36.996 19:51:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:10:36.996 19:51:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:10:36.996 [2024-11-26 19:51:27.918090] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:10:37.254 /dev/nbd0 00:10:37.254 19:51:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:10:37.254 19:51:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:10:37.254 19:51:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:10:37.254 19:51:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:10:37.254 19:51:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:37.254 19:51:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:37.254 19:51:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:10:37.254 19:51:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:10:37.254 19:51:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:37.254 19:51:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:37.254 19:51:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:37.254 1+0 records in 00:10:37.254 1+0 records out 00:10:37.254 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000196364 s, 20.9 MB/s 00:10:37.254 19:51:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:37.254 19:51:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:10:37.254 19:51:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:37.254 19:51:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:37.254 19:51:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:10:37.254 19:51:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:37.254 19:51:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:10:37.254 19:51:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:10:37.254 19:51:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:10:37.254 19:51:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:10:42.515 63488+0 records in 00:10:42.515 63488+0 records out 00:10:42.515 32505856 bytes (33 MB, 31 MiB) copied, 5.12365 s, 6.3 MB/s 00:10:42.515 19:51:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:10:42.515 19:51:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:10:42.515 19:51:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:10:42.515 19:51:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:42.515 19:51:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:10:42.515 19:51:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:42.515 19:51:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk 
/dev/nbd0 00:10:42.515 19:51:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:42.515 19:51:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:42.515 19:51:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:42.515 19:51:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:42.515 19:51:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:42.515 19:51:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:42.515 [2024-11-26 19:51:33.309390] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:42.515 19:51:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:10:42.515 19:51:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:10:42.515 19:51:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:10:42.515 19:51:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.515 19:51:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.515 [2024-11-26 19:51:33.317461] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:42.515 19:51:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.515 19:51:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:42.515 19:51:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:42.515 19:51:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:42.515 19:51:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:42.515 19:51:33 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:42.515 19:51:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:42.515 19:51:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:42.515 19:51:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:42.515 19:51:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:42.515 19:51:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:42.515 19:51:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:42.515 19:51:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:42.515 19:51:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.515 19:51:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.515 19:51:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.515 19:51:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:42.515 "name": "raid_bdev1", 00:10:42.515 "uuid": "6de612ee-ce67-4bcf-9c08-9b00092a4550", 00:10:42.515 "strip_size_kb": 0, 00:10:42.515 "state": "online", 00:10:42.515 "raid_level": "raid1", 00:10:42.515 "superblock": true, 00:10:42.515 "num_base_bdevs": 4, 00:10:42.515 "num_base_bdevs_discovered": 3, 00:10:42.515 "num_base_bdevs_operational": 3, 00:10:42.515 "base_bdevs_list": [ 00:10:42.515 { 00:10:42.515 "name": null, 00:10:42.515 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:42.515 "is_configured": false, 00:10:42.515 "data_offset": 0, 00:10:42.515 "data_size": 63488 00:10:42.515 }, 00:10:42.515 { 00:10:42.515 "name": "BaseBdev2", 00:10:42.515 "uuid": "d0a09055-1c7e-51b6-95d0-0b4ddd2db27f", 00:10:42.515 "is_configured": true, 00:10:42.515 
"data_offset": 2048, 00:10:42.515 "data_size": 63488 00:10:42.515 }, 00:10:42.515 { 00:10:42.515 "name": "BaseBdev3", 00:10:42.515 "uuid": "e5d88960-3002-5734-bb7e-3f7745a557e7", 00:10:42.515 "is_configured": true, 00:10:42.515 "data_offset": 2048, 00:10:42.515 "data_size": 63488 00:10:42.515 }, 00:10:42.515 { 00:10:42.515 "name": "BaseBdev4", 00:10:42.515 "uuid": "bab8ff54-57f7-5b87-b948-aeef6515df97", 00:10:42.515 "is_configured": true, 00:10:42.515 "data_offset": 2048, 00:10:42.515 "data_size": 63488 00:10:42.515 } 00:10:42.515 ] 00:10:42.515 }' 00:10:42.516 19:51:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:42.516 19:51:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.773 19:51:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:10:42.773 19:51:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.773 19:51:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.773 [2024-11-26 19:51:33.629551] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:10:42.773 [2024-11-26 19:51:33.637820] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3500 00:10:42.773 19:51:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.773 19:51:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:10:42.773 [2024-11-26 19:51:33.639541] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:10:44.147 19:51:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:10:44.147 19:51:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:44.147 19:51:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # 
local process_type=rebuild 00:10:44.147 19:51:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:10:44.147 19:51:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:44.147 19:51:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:44.147 19:51:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.147 19:51:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:44.147 19:51:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.147 19:51:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.147 19:51:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:44.147 "name": "raid_bdev1", 00:10:44.147 "uuid": "6de612ee-ce67-4bcf-9c08-9b00092a4550", 00:10:44.147 "strip_size_kb": 0, 00:10:44.147 "state": "online", 00:10:44.147 "raid_level": "raid1", 00:10:44.147 "superblock": true, 00:10:44.147 "num_base_bdevs": 4, 00:10:44.147 "num_base_bdevs_discovered": 4, 00:10:44.147 "num_base_bdevs_operational": 4, 00:10:44.147 "process": { 00:10:44.147 "type": "rebuild", 00:10:44.147 "target": "spare", 00:10:44.147 "progress": { 00:10:44.147 "blocks": 20480, 00:10:44.147 "percent": 32 00:10:44.147 } 00:10:44.147 }, 00:10:44.147 "base_bdevs_list": [ 00:10:44.147 { 00:10:44.147 "name": "spare", 00:10:44.147 "uuid": "6e6e17bd-38f6-5005-a53f-d99a4022e157", 00:10:44.147 "is_configured": true, 00:10:44.147 "data_offset": 2048, 00:10:44.147 "data_size": 63488 00:10:44.147 }, 00:10:44.147 { 00:10:44.147 "name": "BaseBdev2", 00:10:44.147 "uuid": "d0a09055-1c7e-51b6-95d0-0b4ddd2db27f", 00:10:44.147 "is_configured": true, 00:10:44.147 "data_offset": 2048, 00:10:44.147 "data_size": 63488 00:10:44.147 }, 00:10:44.147 { 00:10:44.147 "name": "BaseBdev3", 00:10:44.147 "uuid": 
"e5d88960-3002-5734-bb7e-3f7745a557e7", 00:10:44.147 "is_configured": true, 00:10:44.147 "data_offset": 2048, 00:10:44.147 "data_size": 63488 00:10:44.147 }, 00:10:44.147 { 00:10:44.147 "name": "BaseBdev4", 00:10:44.147 "uuid": "bab8ff54-57f7-5b87-b948-aeef6515df97", 00:10:44.147 "is_configured": true, 00:10:44.147 "data_offset": 2048, 00:10:44.147 "data_size": 63488 00:10:44.147 } 00:10:44.147 ] 00:10:44.147 }' 00:10:44.147 19:51:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:44.147 19:51:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:10:44.147 19:51:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:44.147 19:51:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:10:44.147 19:51:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:10:44.147 19:51:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.147 19:51:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.147 [2024-11-26 19:51:34.757713] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:10:44.147 [2024-11-26 19:51:34.846812] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:10:44.147 [2024-11-26 19:51:34.846886] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:44.147 [2024-11-26 19:51:34.846902] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:10:44.147 [2024-11-26 19:51:34.846911] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:10:44.147 19:51:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.147 19:51:34 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:44.147 19:51:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:44.147 19:51:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:44.147 19:51:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:44.147 19:51:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:44.147 19:51:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:44.147 19:51:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:44.147 19:51:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:44.147 19:51:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:44.147 19:51:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:44.147 19:51:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:44.147 19:51:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.147 19:51:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:44.147 19:51:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.147 19:51:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.147 19:51:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:44.147 "name": "raid_bdev1", 00:10:44.147 "uuid": "6de612ee-ce67-4bcf-9c08-9b00092a4550", 00:10:44.147 "strip_size_kb": 0, 00:10:44.147 "state": "online", 00:10:44.147 "raid_level": "raid1", 00:10:44.147 "superblock": true, 00:10:44.147 "num_base_bdevs": 4, 00:10:44.147 
"num_base_bdevs_discovered": 3, 00:10:44.147 "num_base_bdevs_operational": 3, 00:10:44.147 "base_bdevs_list": [ 00:10:44.147 { 00:10:44.147 "name": null, 00:10:44.147 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:44.147 "is_configured": false, 00:10:44.147 "data_offset": 0, 00:10:44.147 "data_size": 63488 00:10:44.147 }, 00:10:44.147 { 00:10:44.147 "name": "BaseBdev2", 00:10:44.147 "uuid": "d0a09055-1c7e-51b6-95d0-0b4ddd2db27f", 00:10:44.147 "is_configured": true, 00:10:44.147 "data_offset": 2048, 00:10:44.147 "data_size": 63488 00:10:44.147 }, 00:10:44.147 { 00:10:44.147 "name": "BaseBdev3", 00:10:44.147 "uuid": "e5d88960-3002-5734-bb7e-3f7745a557e7", 00:10:44.147 "is_configured": true, 00:10:44.147 "data_offset": 2048, 00:10:44.147 "data_size": 63488 00:10:44.147 }, 00:10:44.147 { 00:10:44.147 "name": "BaseBdev4", 00:10:44.147 "uuid": "bab8ff54-57f7-5b87-b948-aeef6515df97", 00:10:44.147 "is_configured": true, 00:10:44.147 "data_offset": 2048, 00:10:44.147 "data_size": 63488 00:10:44.147 } 00:10:44.147 ] 00:10:44.147 }' 00:10:44.147 19:51:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:44.147 19:51:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.406 19:51:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:10:44.406 19:51:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:44.406 19:51:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:10:44.406 19:51:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:10:44.406 19:51:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:44.406 19:51:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:44.406 19:51:35 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:44.406 19:51:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.406 19:51:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.406 19:51:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.406 19:51:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:44.406 "name": "raid_bdev1", 00:10:44.406 "uuid": "6de612ee-ce67-4bcf-9c08-9b00092a4550", 00:10:44.406 "strip_size_kb": 0, 00:10:44.406 "state": "online", 00:10:44.406 "raid_level": "raid1", 00:10:44.406 "superblock": true, 00:10:44.406 "num_base_bdevs": 4, 00:10:44.406 "num_base_bdevs_discovered": 3, 00:10:44.406 "num_base_bdevs_operational": 3, 00:10:44.406 "base_bdevs_list": [ 00:10:44.406 { 00:10:44.406 "name": null, 00:10:44.406 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:44.406 "is_configured": false, 00:10:44.406 "data_offset": 0, 00:10:44.406 "data_size": 63488 00:10:44.406 }, 00:10:44.406 { 00:10:44.406 "name": "BaseBdev2", 00:10:44.406 "uuid": "d0a09055-1c7e-51b6-95d0-0b4ddd2db27f", 00:10:44.406 "is_configured": true, 00:10:44.406 "data_offset": 2048, 00:10:44.406 "data_size": 63488 00:10:44.406 }, 00:10:44.406 { 00:10:44.406 "name": "BaseBdev3", 00:10:44.406 "uuid": "e5d88960-3002-5734-bb7e-3f7745a557e7", 00:10:44.406 "is_configured": true, 00:10:44.406 "data_offset": 2048, 00:10:44.406 "data_size": 63488 00:10:44.406 }, 00:10:44.406 { 00:10:44.406 "name": "BaseBdev4", 00:10:44.406 "uuid": "bab8ff54-57f7-5b87-b948-aeef6515df97", 00:10:44.406 "is_configured": true, 00:10:44.406 "data_offset": 2048, 00:10:44.406 "data_size": 63488 00:10:44.406 } 00:10:44.406 ] 00:10:44.406 }' 00:10:44.406 19:51:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:44.406 19:51:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == 
\n\o\n\e ]] 00:10:44.406 19:51:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:44.406 19:51:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:10:44.406 19:51:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:10:44.406 19:51:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.406 19:51:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.406 [2024-11-26 19:51:35.275666] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:10:44.406 [2024-11-26 19:51:35.283374] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca35d0 00:10:44.406 19:51:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.406 19:51:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:10:44.406 [2024-11-26 19:51:35.285078] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:10:45.780 19:51:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:10:45.780 19:51:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:45.780 19:51:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:10:45.780 19:51:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:10:45.780 19:51:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:45.780 19:51:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.780 19:51:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:45.780 19:51:36 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.780 19:51:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.780 19:51:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.780 19:51:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:45.780 "name": "raid_bdev1", 00:10:45.780 "uuid": "6de612ee-ce67-4bcf-9c08-9b00092a4550", 00:10:45.780 "strip_size_kb": 0, 00:10:45.780 "state": "online", 00:10:45.780 "raid_level": "raid1", 00:10:45.780 "superblock": true, 00:10:45.780 "num_base_bdevs": 4, 00:10:45.780 "num_base_bdevs_discovered": 4, 00:10:45.780 "num_base_bdevs_operational": 4, 00:10:45.780 "process": { 00:10:45.780 "type": "rebuild", 00:10:45.780 "target": "spare", 00:10:45.780 "progress": { 00:10:45.780 "blocks": 20480, 00:10:45.780 "percent": 32 00:10:45.780 } 00:10:45.780 }, 00:10:45.780 "base_bdevs_list": [ 00:10:45.780 { 00:10:45.780 "name": "spare", 00:10:45.780 "uuid": "6e6e17bd-38f6-5005-a53f-d99a4022e157", 00:10:45.780 "is_configured": true, 00:10:45.780 "data_offset": 2048, 00:10:45.780 "data_size": 63488 00:10:45.780 }, 00:10:45.780 { 00:10:45.780 "name": "BaseBdev2", 00:10:45.780 "uuid": "d0a09055-1c7e-51b6-95d0-0b4ddd2db27f", 00:10:45.780 "is_configured": true, 00:10:45.780 "data_offset": 2048, 00:10:45.780 "data_size": 63488 00:10:45.780 }, 00:10:45.780 { 00:10:45.780 "name": "BaseBdev3", 00:10:45.780 "uuid": "e5d88960-3002-5734-bb7e-3f7745a557e7", 00:10:45.780 "is_configured": true, 00:10:45.780 "data_offset": 2048, 00:10:45.780 "data_size": 63488 00:10:45.780 }, 00:10:45.780 { 00:10:45.780 "name": "BaseBdev4", 00:10:45.780 "uuid": "bab8ff54-57f7-5b87-b948-aeef6515df97", 00:10:45.780 "is_configured": true, 00:10:45.780 "data_offset": 2048, 00:10:45.780 "data_size": 63488 00:10:45.780 } 00:10:45.780 ] 00:10:45.780 }' 00:10:45.780 19:51:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- 
# jq -r '.process.type // "none"' 00:10:45.780 19:51:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:10:45.780 19:51:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:45.780 19:51:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:10:45.780 19:51:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:10:45.780 19:51:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:10:45.780 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:10:45.780 19:51:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:10:45.780 19:51:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:10:45.780 19:51:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:10:45.780 19:51:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:45.780 19:51:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.780 19:51:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.780 [2024-11-26 19:51:36.399187] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:45.780 [2024-11-26 19:51:36.592431] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000ca35d0 00:10:45.780 19:51:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.780 19:51:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:10:45.780 19:51:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:10:45.780 19:51:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@702 -- # 
verify_raid_bdev_process raid_bdev1 rebuild spare 00:10:45.780 19:51:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:45.780 19:51:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:10:45.780 19:51:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:10:45.780 19:51:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:45.780 19:51:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.780 19:51:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.780 19:51:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.780 19:51:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:45.780 19:51:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.780 19:51:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:45.780 "name": "raid_bdev1", 00:10:45.780 "uuid": "6de612ee-ce67-4bcf-9c08-9b00092a4550", 00:10:45.780 "strip_size_kb": 0, 00:10:45.780 "state": "online", 00:10:45.780 "raid_level": "raid1", 00:10:45.780 "superblock": true, 00:10:45.780 "num_base_bdevs": 4, 00:10:45.780 "num_base_bdevs_discovered": 3, 00:10:45.780 "num_base_bdevs_operational": 3, 00:10:45.780 "process": { 00:10:45.780 "type": "rebuild", 00:10:45.780 "target": "spare", 00:10:45.780 "progress": { 00:10:45.780 "blocks": 24576, 00:10:45.780 "percent": 38 00:10:45.780 } 00:10:45.780 }, 00:10:45.780 "base_bdevs_list": [ 00:10:45.780 { 00:10:45.780 "name": "spare", 00:10:45.780 "uuid": "6e6e17bd-38f6-5005-a53f-d99a4022e157", 00:10:45.780 "is_configured": true, 00:10:45.780 "data_offset": 2048, 00:10:45.780 "data_size": 63488 00:10:45.780 }, 00:10:45.780 { 00:10:45.780 "name": null, 
00:10:45.780 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:45.780 "is_configured": false, 00:10:45.780 "data_offset": 0, 00:10:45.780 "data_size": 63488 00:10:45.780 }, 00:10:45.780 { 00:10:45.780 "name": "BaseBdev3", 00:10:45.780 "uuid": "e5d88960-3002-5734-bb7e-3f7745a557e7", 00:10:45.780 "is_configured": true, 00:10:45.780 "data_offset": 2048, 00:10:45.780 "data_size": 63488 00:10:45.780 }, 00:10:45.780 { 00:10:45.780 "name": "BaseBdev4", 00:10:45.780 "uuid": "bab8ff54-57f7-5b87-b948-aeef6515df97", 00:10:45.780 "is_configured": true, 00:10:45.780 "data_offset": 2048, 00:10:45.780 "data_size": 63488 00:10:45.780 } 00:10:45.780 ] 00:10:45.780 }' 00:10:45.780 19:51:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:45.780 19:51:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:10:45.780 19:51:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:45.780 19:51:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:10:45.780 19:51:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=366 00:10:45.780 19:51:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:10:45.780 19:51:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:10:45.780 19:51:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:45.780 19:51:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:10:45.780 19:51:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:10:45.780 19:51:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:45.781 19:51:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:10:45.781 19:51:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.781 19:51:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.781 19:51:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:45.781 19:51:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.038 19:51:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:46.038 "name": "raid_bdev1", 00:10:46.038 "uuid": "6de612ee-ce67-4bcf-9c08-9b00092a4550", 00:10:46.038 "strip_size_kb": 0, 00:10:46.038 "state": "online", 00:10:46.038 "raid_level": "raid1", 00:10:46.038 "superblock": true, 00:10:46.038 "num_base_bdevs": 4, 00:10:46.038 "num_base_bdevs_discovered": 3, 00:10:46.038 "num_base_bdevs_operational": 3, 00:10:46.038 "process": { 00:10:46.038 "type": "rebuild", 00:10:46.038 "target": "spare", 00:10:46.038 "progress": { 00:10:46.038 "blocks": 26624, 00:10:46.038 "percent": 41 00:10:46.038 } 00:10:46.038 }, 00:10:46.038 "base_bdevs_list": [ 00:10:46.038 { 00:10:46.038 "name": "spare", 00:10:46.038 "uuid": "6e6e17bd-38f6-5005-a53f-d99a4022e157", 00:10:46.038 "is_configured": true, 00:10:46.038 "data_offset": 2048, 00:10:46.038 "data_size": 63488 00:10:46.038 }, 00:10:46.038 { 00:10:46.038 "name": null, 00:10:46.038 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:46.038 "is_configured": false, 00:10:46.038 "data_offset": 0, 00:10:46.038 "data_size": 63488 00:10:46.038 }, 00:10:46.038 { 00:10:46.038 "name": "BaseBdev3", 00:10:46.038 "uuid": "e5d88960-3002-5734-bb7e-3f7745a557e7", 00:10:46.038 "is_configured": true, 00:10:46.038 "data_offset": 2048, 00:10:46.038 "data_size": 63488 00:10:46.038 }, 00:10:46.038 { 00:10:46.038 "name": "BaseBdev4", 00:10:46.038 "uuid": "bab8ff54-57f7-5b87-b948-aeef6515df97", 00:10:46.038 "is_configured": true, 00:10:46.038 "data_offset": 
2048, 00:10:46.038 "data_size": 63488 00:10:46.038 } 00:10:46.038 ] 00:10:46.038 }' 00:10:46.038 19:51:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:46.038 19:51:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:10:46.038 19:51:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:46.038 19:51:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:10:46.038 19:51:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:10:46.973 19:51:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:10:46.973 19:51:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:10:46.973 19:51:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:46.973 19:51:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:10:46.973 19:51:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:10:46.973 19:51:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:46.973 19:51:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:46.973 19:51:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:46.973 19:51:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.973 19:51:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.973 19:51:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.973 19:51:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:46.973 "name": "raid_bdev1", 00:10:46.973 
"uuid": "6de612ee-ce67-4bcf-9c08-9b00092a4550", 00:10:46.973 "strip_size_kb": 0, 00:10:46.973 "state": "online", 00:10:46.973 "raid_level": "raid1", 00:10:46.973 "superblock": true, 00:10:46.973 "num_base_bdevs": 4, 00:10:46.973 "num_base_bdevs_discovered": 3, 00:10:46.973 "num_base_bdevs_operational": 3, 00:10:46.973 "process": { 00:10:46.973 "type": "rebuild", 00:10:46.973 "target": "spare", 00:10:46.973 "progress": { 00:10:46.973 "blocks": 49152, 00:10:46.973 "percent": 77 00:10:46.973 } 00:10:46.973 }, 00:10:46.973 "base_bdevs_list": [ 00:10:46.973 { 00:10:46.973 "name": "spare", 00:10:46.973 "uuid": "6e6e17bd-38f6-5005-a53f-d99a4022e157", 00:10:46.973 "is_configured": true, 00:10:46.973 "data_offset": 2048, 00:10:46.973 "data_size": 63488 00:10:46.973 }, 00:10:46.973 { 00:10:46.973 "name": null, 00:10:46.973 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:46.973 "is_configured": false, 00:10:46.973 "data_offset": 0, 00:10:46.973 "data_size": 63488 00:10:46.973 }, 00:10:46.973 { 00:10:46.973 "name": "BaseBdev3", 00:10:46.973 "uuid": "e5d88960-3002-5734-bb7e-3f7745a557e7", 00:10:46.973 "is_configured": true, 00:10:46.973 "data_offset": 2048, 00:10:46.973 "data_size": 63488 00:10:46.973 }, 00:10:46.973 { 00:10:46.973 "name": "BaseBdev4", 00:10:46.973 "uuid": "bab8ff54-57f7-5b87-b948-aeef6515df97", 00:10:46.973 "is_configured": true, 00:10:46.973 "data_offset": 2048, 00:10:46.973 "data_size": 63488 00:10:46.973 } 00:10:46.973 ] 00:10:46.973 }' 00:10:46.973 19:51:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:46.973 19:51:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:10:46.973 19:51:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:46.973 19:51:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:10:46.974 19:51:37 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@711 -- # sleep 1 00:10:47.936 [2024-11-26 19:51:38.503865] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:10:47.936 [2024-11-26 19:51:38.503960] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:10:47.936 [2024-11-26 19:51:38.504092] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:48.195 19:51:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:10:48.195 19:51:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:10:48.195 19:51:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:48.195 19:51:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:10:48.195 19:51:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:10:48.195 19:51:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:48.195 19:51:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:48.195 19:51:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.195 19:51:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.195 19:51:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:48.195 19:51:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.195 19:51:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:48.195 "name": "raid_bdev1", 00:10:48.195 "uuid": "6de612ee-ce67-4bcf-9c08-9b00092a4550", 00:10:48.195 "strip_size_kb": 0, 00:10:48.195 "state": "online", 00:10:48.195 "raid_level": "raid1", 00:10:48.195 "superblock": true, 00:10:48.195 "num_base_bdevs": 
4, 00:10:48.195 "num_base_bdevs_discovered": 3, 00:10:48.195 "num_base_bdevs_operational": 3, 00:10:48.195 "base_bdevs_list": [ 00:10:48.195 { 00:10:48.195 "name": "spare", 00:10:48.195 "uuid": "6e6e17bd-38f6-5005-a53f-d99a4022e157", 00:10:48.195 "is_configured": true, 00:10:48.195 "data_offset": 2048, 00:10:48.195 "data_size": 63488 00:10:48.195 }, 00:10:48.195 { 00:10:48.195 "name": null, 00:10:48.195 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:48.195 "is_configured": false, 00:10:48.195 "data_offset": 0, 00:10:48.195 "data_size": 63488 00:10:48.195 }, 00:10:48.195 { 00:10:48.195 "name": "BaseBdev3", 00:10:48.195 "uuid": "e5d88960-3002-5734-bb7e-3f7745a557e7", 00:10:48.195 "is_configured": true, 00:10:48.195 "data_offset": 2048, 00:10:48.195 "data_size": 63488 00:10:48.195 }, 00:10:48.195 { 00:10:48.195 "name": "BaseBdev4", 00:10:48.195 "uuid": "bab8ff54-57f7-5b87-b948-aeef6515df97", 00:10:48.195 "is_configured": true, 00:10:48.195 "data_offset": 2048, 00:10:48.195 "data_size": 63488 00:10:48.195 } 00:10:48.195 ] 00:10:48.195 }' 00:10:48.195 19:51:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:48.195 19:51:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:10:48.195 19:51:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:48.195 19:51:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:10:48.195 19:51:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:10:48.195 19:51:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:10:48.195 19:51:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:48.196 19:51:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:10:48.196 19:51:39 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:10:48.196 19:51:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:48.196 19:51:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:48.196 19:51:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.196 19:51:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:48.196 19:51:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.196 19:51:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.196 19:51:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:48.196 "name": "raid_bdev1", 00:10:48.196 "uuid": "6de612ee-ce67-4bcf-9c08-9b00092a4550", 00:10:48.196 "strip_size_kb": 0, 00:10:48.196 "state": "online", 00:10:48.196 "raid_level": "raid1", 00:10:48.196 "superblock": true, 00:10:48.196 "num_base_bdevs": 4, 00:10:48.196 "num_base_bdevs_discovered": 3, 00:10:48.196 "num_base_bdevs_operational": 3, 00:10:48.196 "base_bdevs_list": [ 00:10:48.196 { 00:10:48.196 "name": "spare", 00:10:48.196 "uuid": "6e6e17bd-38f6-5005-a53f-d99a4022e157", 00:10:48.196 "is_configured": true, 00:10:48.196 "data_offset": 2048, 00:10:48.196 "data_size": 63488 00:10:48.196 }, 00:10:48.196 { 00:10:48.196 "name": null, 00:10:48.196 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:48.196 "is_configured": false, 00:10:48.196 "data_offset": 0, 00:10:48.196 "data_size": 63488 00:10:48.196 }, 00:10:48.196 { 00:10:48.196 "name": "BaseBdev3", 00:10:48.196 "uuid": "e5d88960-3002-5734-bb7e-3f7745a557e7", 00:10:48.196 "is_configured": true, 00:10:48.196 "data_offset": 2048, 00:10:48.196 "data_size": 63488 00:10:48.196 }, 00:10:48.196 { 00:10:48.196 "name": "BaseBdev4", 00:10:48.196 "uuid": 
"bab8ff54-57f7-5b87-b948-aeef6515df97", 00:10:48.196 "is_configured": true, 00:10:48.196 "data_offset": 2048, 00:10:48.196 "data_size": 63488 00:10:48.196 } 00:10:48.196 ] 00:10:48.196 }' 00:10:48.196 19:51:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:48.196 19:51:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:10:48.196 19:51:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:48.196 19:51:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:10:48.196 19:51:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:48.196 19:51:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:48.196 19:51:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:48.196 19:51:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:48.196 19:51:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:48.196 19:51:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:48.196 19:51:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:48.196 19:51:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:48.196 19:51:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:48.196 19:51:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:48.196 19:51:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:48.196 19:51:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:48.196 19:51:39 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.196 19:51:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.196 19:51:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.454 19:51:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:48.454 "name": "raid_bdev1", 00:10:48.454 "uuid": "6de612ee-ce67-4bcf-9c08-9b00092a4550", 00:10:48.454 "strip_size_kb": 0, 00:10:48.454 "state": "online", 00:10:48.454 "raid_level": "raid1", 00:10:48.454 "superblock": true, 00:10:48.454 "num_base_bdevs": 4, 00:10:48.454 "num_base_bdevs_discovered": 3, 00:10:48.454 "num_base_bdevs_operational": 3, 00:10:48.454 "base_bdevs_list": [ 00:10:48.454 { 00:10:48.454 "name": "spare", 00:10:48.454 "uuid": "6e6e17bd-38f6-5005-a53f-d99a4022e157", 00:10:48.454 "is_configured": true, 00:10:48.454 "data_offset": 2048, 00:10:48.454 "data_size": 63488 00:10:48.454 }, 00:10:48.454 { 00:10:48.454 "name": null, 00:10:48.454 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:48.454 "is_configured": false, 00:10:48.454 "data_offset": 0, 00:10:48.454 "data_size": 63488 00:10:48.454 }, 00:10:48.454 { 00:10:48.454 "name": "BaseBdev3", 00:10:48.454 "uuid": "e5d88960-3002-5734-bb7e-3f7745a557e7", 00:10:48.454 "is_configured": true, 00:10:48.454 "data_offset": 2048, 00:10:48.454 "data_size": 63488 00:10:48.454 }, 00:10:48.454 { 00:10:48.454 "name": "BaseBdev4", 00:10:48.454 "uuid": "bab8ff54-57f7-5b87-b948-aeef6515df97", 00:10:48.454 "is_configured": true, 00:10:48.454 "data_offset": 2048, 00:10:48.454 "data_size": 63488 00:10:48.454 } 00:10:48.454 ] 00:10:48.454 }' 00:10:48.454 19:51:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:48.454 19:51:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.713 19:51:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd 
bdev_raid_delete raid_bdev1 00:10:48.713 19:51:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.713 19:51:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.713 [2024-11-26 19:51:39.429062] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:48.713 [2024-11-26 19:51:39.429095] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:48.713 [2024-11-26 19:51:39.429182] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:48.713 [2024-11-26 19:51:39.429256] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:48.713 [2024-11-26 19:51:39.429265] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:10:48.713 19:51:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.713 19:51:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:48.713 19:51:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.713 19:51:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.713 19:51:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:10:48.713 19:51:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.713 19:51:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:10:48.713 19:51:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:10:48.713 19:51:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:10:48.713 19:51:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:10:48.713 
19:51:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:10:48.713 19:51:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:10:48.713 19:51:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:10:48.713 19:51:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:48.713 19:51:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:10:48.713 19:51:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:10:48.713 19:51:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:10:48.713 19:51:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:48.713 19:51:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:10:48.972 /dev/nbd0 00:10:48.972 19:51:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:10:48.972 19:51:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:10:48.972 19:51:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:10:48.972 19:51:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:10:48.972 19:51:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:48.972 19:51:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:48.972 19:51:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:10:48.972 19:51:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:10:48.972 19:51:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:48.972 19:51:39 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:48.972 19:51:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:48.972 1+0 records in 00:10:48.972 1+0 records out 00:10:48.972 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000241661 s, 16.9 MB/s 00:10:48.972 19:51:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:48.972 19:51:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:10:48.972 19:51:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:48.972 19:51:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:48.972 19:51:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:10:48.972 19:51:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:48.972 19:51:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:48.972 19:51:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:10:48.972 /dev/nbd1 00:10:49.231 19:51:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:10:49.231 19:51:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:10:49.231 19:51:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:10:49.231 19:51:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:10:49.231 19:51:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:49.231 19:51:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- 
# (( i <= 20 )) 00:10:49.231 19:51:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:10:49.231 19:51:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:10:49.231 19:51:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:49.231 19:51:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:49.231 19:51:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:49.231 1+0 records in 00:10:49.231 1+0 records out 00:10:49.231 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000375693 s, 10.9 MB/s 00:10:49.231 19:51:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:49.231 19:51:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:10:49.231 19:51:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:49.231 19:51:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:49.231 19:51:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:10:49.231 19:51:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:49.231 19:51:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:49.231 19:51:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:10:49.231 19:51:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:10:49.231 19:51:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:10:49.231 19:51:40 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:49.231 19:51:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:49.231 19:51:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:10:49.231 19:51:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:49.231 19:51:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:10:49.490 19:51:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:49.490 19:51:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:49.490 19:51:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:49.490 19:51:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:49.490 19:51:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:49.490 19:51:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:49.490 19:51:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:10:49.490 19:51:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:10:49.490 19:51:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:49.490 19:51:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:10:49.749 19:51:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:10:49.749 19:51:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:10:49.749 19:51:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:10:49.749 19:51:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 
-- # (( i = 1 )) 00:10:49.749 19:51:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:49.749 19:51:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:10:49.749 19:51:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:10:49.749 19:51:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:10:49.749 19:51:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:10:49.749 19:51:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:10:49.749 19:51:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.749 19:51:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.749 19:51:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.749 19:51:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:10:49.749 19:51:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.749 19:51:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.749 [2024-11-26 19:51:40.509510] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:10:49.749 [2024-11-26 19:51:40.509559] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:49.749 [2024-11-26 19:51:40.509582] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:10:49.749 [2024-11-26 19:51:40.509591] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:49.749 [2024-11-26 19:51:40.511623] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:49.749 [2024-11-26 19:51:40.511654] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev 
for: spare 00:10:49.749 [2024-11-26 19:51:40.511743] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:10:49.749 [2024-11-26 19:51:40.511784] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:10:49.749 [2024-11-26 19:51:40.511901] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:49.749 [2024-11-26 19:51:40.511984] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:49.749 spare 00:10:49.749 19:51:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.749 19:51:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:10:49.749 19:51:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.749 19:51:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.749 [2024-11-26 19:51:40.612074] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:10:49.749 [2024-11-26 19:51:40.612120] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:49.749 [2024-11-26 19:51:40.612462] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:10:49.749 [2024-11-26 19:51:40.612659] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:10:49.749 [2024-11-26 19:51:40.612704] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:10:49.749 [2024-11-26 19:51:40.612863] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:49.749 19:51:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.749 19:51:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:49.749 19:51:40 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:49.749 19:51:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:49.749 19:51:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:49.749 19:51:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:49.749 19:51:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:49.749 19:51:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:49.749 19:51:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:49.749 19:51:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:49.749 19:51:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:49.749 19:51:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:49.750 19:51:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:49.750 19:51:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.750 19:51:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.750 19:51:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.750 19:51:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:49.750 "name": "raid_bdev1", 00:10:49.750 "uuid": "6de612ee-ce67-4bcf-9c08-9b00092a4550", 00:10:49.750 "strip_size_kb": 0, 00:10:49.750 "state": "online", 00:10:49.750 "raid_level": "raid1", 00:10:49.750 "superblock": true, 00:10:49.750 "num_base_bdevs": 4, 00:10:49.750 "num_base_bdevs_discovered": 3, 00:10:49.750 "num_base_bdevs_operational": 3, 00:10:49.750 "base_bdevs_list": [ 00:10:49.750 { 
00:10:49.750 "name": "spare", 00:10:49.750 "uuid": "6e6e17bd-38f6-5005-a53f-d99a4022e157", 00:10:49.750 "is_configured": true, 00:10:49.750 "data_offset": 2048, 00:10:49.750 "data_size": 63488 00:10:49.750 }, 00:10:49.750 { 00:10:49.750 "name": null, 00:10:49.750 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:49.750 "is_configured": false, 00:10:49.750 "data_offset": 2048, 00:10:49.750 "data_size": 63488 00:10:49.750 }, 00:10:49.750 { 00:10:49.750 "name": "BaseBdev3", 00:10:49.750 "uuid": "e5d88960-3002-5734-bb7e-3f7745a557e7", 00:10:49.750 "is_configured": true, 00:10:49.750 "data_offset": 2048, 00:10:49.750 "data_size": 63488 00:10:49.750 }, 00:10:49.750 { 00:10:49.750 "name": "BaseBdev4", 00:10:49.750 "uuid": "bab8ff54-57f7-5b87-b948-aeef6515df97", 00:10:49.750 "is_configured": true, 00:10:49.750 "data_offset": 2048, 00:10:49.750 "data_size": 63488 00:10:49.750 } 00:10:49.750 ] 00:10:49.750 }' 00:10:49.750 19:51:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:49.750 19:51:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.008 19:51:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:10:50.008 19:51:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:50.008 19:51:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:10:50.008 19:51:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:10:50.008 19:51:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:50.008 19:51:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:50.008 19:51:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.008 19:51:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.008 
19:51:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:50.267 19:51:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.267 19:51:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:50.267 "name": "raid_bdev1", 00:10:50.267 "uuid": "6de612ee-ce67-4bcf-9c08-9b00092a4550", 00:10:50.267 "strip_size_kb": 0, 00:10:50.267 "state": "online", 00:10:50.267 "raid_level": "raid1", 00:10:50.267 "superblock": true, 00:10:50.267 "num_base_bdevs": 4, 00:10:50.267 "num_base_bdevs_discovered": 3, 00:10:50.267 "num_base_bdevs_operational": 3, 00:10:50.267 "base_bdevs_list": [ 00:10:50.267 { 00:10:50.267 "name": "spare", 00:10:50.267 "uuid": "6e6e17bd-38f6-5005-a53f-d99a4022e157", 00:10:50.267 "is_configured": true, 00:10:50.267 "data_offset": 2048, 00:10:50.267 "data_size": 63488 00:10:50.267 }, 00:10:50.267 { 00:10:50.267 "name": null, 00:10:50.267 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:50.267 "is_configured": false, 00:10:50.267 "data_offset": 2048, 00:10:50.267 "data_size": 63488 00:10:50.267 }, 00:10:50.267 { 00:10:50.267 "name": "BaseBdev3", 00:10:50.267 "uuid": "e5d88960-3002-5734-bb7e-3f7745a557e7", 00:10:50.267 "is_configured": true, 00:10:50.267 "data_offset": 2048, 00:10:50.267 "data_size": 63488 00:10:50.267 }, 00:10:50.267 { 00:10:50.267 "name": "BaseBdev4", 00:10:50.267 "uuid": "bab8ff54-57f7-5b87-b948-aeef6515df97", 00:10:50.267 "is_configured": true, 00:10:50.267 "data_offset": 2048, 00:10:50.267 "data_size": 63488 00:10:50.267 } 00:10:50.267 ] 00:10:50.267 }' 00:10:50.267 19:51:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:50.267 19:51:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:10:50.267 19:51:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:50.267 19:51:41 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:10:50.267 19:51:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:10:50.267 19:51:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:50.267 19:51:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.267 19:51:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.267 19:51:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.267 19:51:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:10:50.267 19:51:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:10:50.267 19:51:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.267 19:51:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.267 [2024-11-26 19:51:41.065669] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:10:50.267 19:51:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.267 19:51:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:50.267 19:51:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:50.267 19:51:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:50.267 19:51:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:50.267 19:51:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:50.267 19:51:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:50.267 19:51:41 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:50.267 19:51:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:50.267 19:51:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:50.267 19:51:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:50.267 19:51:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:50.267 19:51:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.267 19:51:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:50.267 19:51:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.267 19:51:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.267 19:51:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:50.267 "name": "raid_bdev1", 00:10:50.267 "uuid": "6de612ee-ce67-4bcf-9c08-9b00092a4550", 00:10:50.267 "strip_size_kb": 0, 00:10:50.267 "state": "online", 00:10:50.267 "raid_level": "raid1", 00:10:50.267 "superblock": true, 00:10:50.267 "num_base_bdevs": 4, 00:10:50.267 "num_base_bdevs_discovered": 2, 00:10:50.267 "num_base_bdevs_operational": 2, 00:10:50.267 "base_bdevs_list": [ 00:10:50.267 { 00:10:50.267 "name": null, 00:10:50.267 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:50.267 "is_configured": false, 00:10:50.267 "data_offset": 0, 00:10:50.267 "data_size": 63488 00:10:50.267 }, 00:10:50.267 { 00:10:50.267 "name": null, 00:10:50.267 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:50.267 "is_configured": false, 00:10:50.267 "data_offset": 2048, 00:10:50.267 "data_size": 63488 00:10:50.267 }, 00:10:50.267 { 00:10:50.267 "name": "BaseBdev3", 00:10:50.267 "uuid": "e5d88960-3002-5734-bb7e-3f7745a557e7", 00:10:50.267 
"is_configured": true, 00:10:50.267 "data_offset": 2048, 00:10:50.267 "data_size": 63488 00:10:50.267 }, 00:10:50.267 { 00:10:50.267 "name": "BaseBdev4", 00:10:50.267 "uuid": "bab8ff54-57f7-5b87-b948-aeef6515df97", 00:10:50.267 "is_configured": true, 00:10:50.267 "data_offset": 2048, 00:10:50.267 "data_size": 63488 00:10:50.267 } 00:10:50.267 ] 00:10:50.267 }' 00:10:50.267 19:51:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:50.267 19:51:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.525 19:51:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:10:50.525 19:51:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.525 19:51:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.525 [2024-11-26 19:51:41.389739] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:10:50.526 [2024-11-26 19:51:41.389946] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:10:50.526 [2024-11-26 19:51:41.389964] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:10:50.526 [2024-11-26 19:51:41.389997] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:10:50.526 [2024-11-26 19:51:41.397636] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1d50 00:10:50.526 19:51:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.526 19:51:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:10:50.526 [2024-11-26 19:51:41.399314] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:10:51.898 19:51:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:10:51.898 19:51:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:51.898 19:51:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:10:51.898 19:51:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:10:51.898 19:51:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:51.898 19:51:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:51.898 19:51:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:51.898 19:51:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.898 19:51:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.898 19:51:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.898 19:51:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:51.898 "name": "raid_bdev1", 00:10:51.898 "uuid": "6de612ee-ce67-4bcf-9c08-9b00092a4550", 00:10:51.898 "strip_size_kb": 0, 00:10:51.898 "state": "online", 00:10:51.898 "raid_level": "raid1", 
00:10:51.898 "superblock": true, 00:10:51.898 "num_base_bdevs": 4, 00:10:51.898 "num_base_bdevs_discovered": 3, 00:10:51.898 "num_base_bdevs_operational": 3, 00:10:51.898 "process": { 00:10:51.898 "type": "rebuild", 00:10:51.898 "target": "spare", 00:10:51.898 "progress": { 00:10:51.898 "blocks": 20480, 00:10:51.898 "percent": 32 00:10:51.898 } 00:10:51.898 }, 00:10:51.898 "base_bdevs_list": [ 00:10:51.898 { 00:10:51.898 "name": "spare", 00:10:51.898 "uuid": "6e6e17bd-38f6-5005-a53f-d99a4022e157", 00:10:51.898 "is_configured": true, 00:10:51.898 "data_offset": 2048, 00:10:51.898 "data_size": 63488 00:10:51.898 }, 00:10:51.898 { 00:10:51.898 "name": null, 00:10:51.898 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:51.898 "is_configured": false, 00:10:51.898 "data_offset": 2048, 00:10:51.898 "data_size": 63488 00:10:51.898 }, 00:10:51.898 { 00:10:51.898 "name": "BaseBdev3", 00:10:51.898 "uuid": "e5d88960-3002-5734-bb7e-3f7745a557e7", 00:10:51.898 "is_configured": true, 00:10:51.898 "data_offset": 2048, 00:10:51.898 "data_size": 63488 00:10:51.898 }, 00:10:51.898 { 00:10:51.898 "name": "BaseBdev4", 00:10:51.899 "uuid": "bab8ff54-57f7-5b87-b948-aeef6515df97", 00:10:51.899 "is_configured": true, 00:10:51.899 "data_offset": 2048, 00:10:51.899 "data_size": 63488 00:10:51.899 } 00:10:51.899 ] 00:10:51.899 }' 00:10:51.899 19:51:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:51.899 19:51:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:10:51.899 19:51:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:51.899 19:51:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:10:51.899 19:51:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:10:51.899 19:51:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:10:51.899 19:51:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.899 [2024-11-26 19:51:42.505238] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:10:51.899 [2024-11-26 19:51:42.505551] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:10:51.899 [2024-11-26 19:51:42.505593] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:51.899 [2024-11-26 19:51:42.505608] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:10:51.899 [2024-11-26 19:51:42.505614] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:10:51.899 19:51:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.899 19:51:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:51.899 19:51:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:51.899 19:51:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:51.899 19:51:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:51.899 19:51:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:51.899 19:51:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:51.899 19:51:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:51.899 19:51:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:51.899 19:51:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:51.899 19:51:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:51.899 19:51:42 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:51.899 19:51:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.899 19:51:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.899 19:51:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:51.899 19:51:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.899 19:51:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:51.899 "name": "raid_bdev1", 00:10:51.899 "uuid": "6de612ee-ce67-4bcf-9c08-9b00092a4550", 00:10:51.899 "strip_size_kb": 0, 00:10:51.899 "state": "online", 00:10:51.899 "raid_level": "raid1", 00:10:51.899 "superblock": true, 00:10:51.899 "num_base_bdevs": 4, 00:10:51.899 "num_base_bdevs_discovered": 2, 00:10:51.899 "num_base_bdevs_operational": 2, 00:10:51.899 "base_bdevs_list": [ 00:10:51.899 { 00:10:51.899 "name": null, 00:10:51.899 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:51.899 "is_configured": false, 00:10:51.899 "data_offset": 0, 00:10:51.899 "data_size": 63488 00:10:51.899 }, 00:10:51.899 { 00:10:51.899 "name": null, 00:10:51.899 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:51.899 "is_configured": false, 00:10:51.899 "data_offset": 2048, 00:10:51.899 "data_size": 63488 00:10:51.899 }, 00:10:51.899 { 00:10:51.899 "name": "BaseBdev3", 00:10:51.899 "uuid": "e5d88960-3002-5734-bb7e-3f7745a557e7", 00:10:51.899 "is_configured": true, 00:10:51.899 "data_offset": 2048, 00:10:51.899 "data_size": 63488 00:10:51.899 }, 00:10:51.899 { 00:10:51.899 "name": "BaseBdev4", 00:10:51.899 "uuid": "bab8ff54-57f7-5b87-b948-aeef6515df97", 00:10:51.899 "is_configured": true, 00:10:51.899 "data_offset": 2048, 00:10:51.899 "data_size": 63488 00:10:51.899 } 00:10:51.899 ] 00:10:51.899 }' 00:10:51.899 19:51:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:10:51.899 19:51:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.899 19:51:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:10:51.899 19:51:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.899 19:51:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.899 [2024-11-26 19:51:42.813630] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:10:51.899 [2024-11-26 19:51:42.813694] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:51.899 [2024-11-26 19:51:42.813719] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:10:51.899 [2024-11-26 19:51:42.813726] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:51.899 [2024-11-26 19:51:42.814114] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:51.899 [2024-11-26 19:51:42.814126] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:10:51.899 [2024-11-26 19:51:42.814195] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:10:51.899 [2024-11-26 19:51:42.814205] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:10:51.899 [2024-11-26 19:51:42.814216] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:10:51.899 [2024-11-26 19:51:42.814234] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:10:51.899 [2024-11-26 19:51:42.821640] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1e20 00:10:51.899 spare 00:10:51.899 19:51:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.899 19:51:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:10:51.899 [2024-11-26 19:51:42.823197] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:10:53.272 19:51:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:10:53.272 19:51:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:53.272 19:51:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:10:53.272 19:51:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:10:53.272 19:51:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:53.272 19:51:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:53.273 19:51:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:53.273 19:51:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.273 19:51:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.273 19:51:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.273 19:51:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:53.273 "name": "raid_bdev1", 00:10:53.273 "uuid": "6de612ee-ce67-4bcf-9c08-9b00092a4550", 00:10:53.273 "strip_size_kb": 0, 00:10:53.273 "state": "online", 00:10:53.273 
"raid_level": "raid1", 00:10:53.273 "superblock": true, 00:10:53.273 "num_base_bdevs": 4, 00:10:53.273 "num_base_bdevs_discovered": 3, 00:10:53.273 "num_base_bdevs_operational": 3, 00:10:53.273 "process": { 00:10:53.273 "type": "rebuild", 00:10:53.273 "target": "spare", 00:10:53.273 "progress": { 00:10:53.273 "blocks": 20480, 00:10:53.273 "percent": 32 00:10:53.273 } 00:10:53.273 }, 00:10:53.273 "base_bdevs_list": [ 00:10:53.273 { 00:10:53.273 "name": "spare", 00:10:53.273 "uuid": "6e6e17bd-38f6-5005-a53f-d99a4022e157", 00:10:53.273 "is_configured": true, 00:10:53.273 "data_offset": 2048, 00:10:53.273 "data_size": 63488 00:10:53.273 }, 00:10:53.273 { 00:10:53.273 "name": null, 00:10:53.273 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:53.273 "is_configured": false, 00:10:53.273 "data_offset": 2048, 00:10:53.273 "data_size": 63488 00:10:53.273 }, 00:10:53.273 { 00:10:53.273 "name": "BaseBdev3", 00:10:53.273 "uuid": "e5d88960-3002-5734-bb7e-3f7745a557e7", 00:10:53.273 "is_configured": true, 00:10:53.273 "data_offset": 2048, 00:10:53.273 "data_size": 63488 00:10:53.273 }, 00:10:53.273 { 00:10:53.273 "name": "BaseBdev4", 00:10:53.273 "uuid": "bab8ff54-57f7-5b87-b948-aeef6515df97", 00:10:53.273 "is_configured": true, 00:10:53.273 "data_offset": 2048, 00:10:53.273 "data_size": 63488 00:10:53.273 } 00:10:53.273 ] 00:10:53.273 }' 00:10:53.273 19:51:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:53.273 19:51:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:10:53.273 19:51:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:53.273 19:51:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:10:53.273 19:51:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:10:53.273 19:51:43 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.273 19:51:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.273 [2024-11-26 19:51:43.937617] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:10:53.273 [2024-11-26 19:51:44.028990] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:10:53.273 [2024-11-26 19:51:44.029224] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:53.273 [2024-11-26 19:51:44.029320] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:10:53.273 [2024-11-26 19:51:44.029354] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:10:53.273 19:51:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.273 19:51:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:53.273 19:51:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:53.273 19:51:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:53.273 19:51:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:53.273 19:51:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:53.273 19:51:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:53.273 19:51:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:53.273 19:51:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:53.273 19:51:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:53.273 19:51:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:53.273 
19:51:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:53.273 19:51:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:53.273 19:51:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.273 19:51:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.273 19:51:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.273 19:51:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:53.273 "name": "raid_bdev1", 00:10:53.273 "uuid": "6de612ee-ce67-4bcf-9c08-9b00092a4550", 00:10:53.273 "strip_size_kb": 0, 00:10:53.273 "state": "online", 00:10:53.273 "raid_level": "raid1", 00:10:53.273 "superblock": true, 00:10:53.273 "num_base_bdevs": 4, 00:10:53.273 "num_base_bdevs_discovered": 2, 00:10:53.273 "num_base_bdevs_operational": 2, 00:10:53.273 "base_bdevs_list": [ 00:10:53.273 { 00:10:53.273 "name": null, 00:10:53.273 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:53.273 "is_configured": false, 00:10:53.273 "data_offset": 0, 00:10:53.273 "data_size": 63488 00:10:53.273 }, 00:10:53.273 { 00:10:53.273 "name": null, 00:10:53.273 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:53.273 "is_configured": false, 00:10:53.273 "data_offset": 2048, 00:10:53.273 "data_size": 63488 00:10:53.273 }, 00:10:53.273 { 00:10:53.273 "name": "BaseBdev3", 00:10:53.273 "uuid": "e5d88960-3002-5734-bb7e-3f7745a557e7", 00:10:53.273 "is_configured": true, 00:10:53.273 "data_offset": 2048, 00:10:53.273 "data_size": 63488 00:10:53.273 }, 00:10:53.273 { 00:10:53.273 "name": "BaseBdev4", 00:10:53.273 "uuid": "bab8ff54-57f7-5b87-b948-aeef6515df97", 00:10:53.273 "is_configured": true, 00:10:53.273 "data_offset": 2048, 00:10:53.273 "data_size": 63488 00:10:53.273 } 00:10:53.273 ] 00:10:53.273 }' 00:10:53.273 19:51:44 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:53.273 19:51:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.532 19:51:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:10:53.532 19:51:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:53.532 19:51:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:10:53.532 19:51:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:10:53.532 19:51:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:53.532 19:51:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:53.532 19:51:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:53.532 19:51:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.532 19:51:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.532 19:51:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.532 19:51:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:53.532 "name": "raid_bdev1", 00:10:53.532 "uuid": "6de612ee-ce67-4bcf-9c08-9b00092a4550", 00:10:53.532 "strip_size_kb": 0, 00:10:53.532 "state": "online", 00:10:53.532 "raid_level": "raid1", 00:10:53.532 "superblock": true, 00:10:53.532 "num_base_bdevs": 4, 00:10:53.532 "num_base_bdevs_discovered": 2, 00:10:53.532 "num_base_bdevs_operational": 2, 00:10:53.532 "base_bdevs_list": [ 00:10:53.532 { 00:10:53.532 "name": null, 00:10:53.532 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:53.532 "is_configured": false, 00:10:53.532 "data_offset": 0, 00:10:53.532 "data_size": 63488 00:10:53.532 }, 00:10:53.532 
{ 00:10:53.532 "name": null, 00:10:53.532 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:53.532 "is_configured": false, 00:10:53.532 "data_offset": 2048, 00:10:53.532 "data_size": 63488 00:10:53.532 }, 00:10:53.532 { 00:10:53.532 "name": "BaseBdev3", 00:10:53.532 "uuid": "e5d88960-3002-5734-bb7e-3f7745a557e7", 00:10:53.532 "is_configured": true, 00:10:53.532 "data_offset": 2048, 00:10:53.532 "data_size": 63488 00:10:53.532 }, 00:10:53.532 { 00:10:53.532 "name": "BaseBdev4", 00:10:53.532 "uuid": "bab8ff54-57f7-5b87-b948-aeef6515df97", 00:10:53.532 "is_configured": true, 00:10:53.532 "data_offset": 2048, 00:10:53.532 "data_size": 63488 00:10:53.532 } 00:10:53.532 ] 00:10:53.532 }' 00:10:53.532 19:51:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:53.532 19:51:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:10:53.532 19:51:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:53.532 19:51:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:10:53.532 19:51:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:10:53.532 19:51:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.532 19:51:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.532 19:51:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.532 19:51:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:10:53.532 19:51:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.532 19:51:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.532 [2024-11-26 19:51:44.453246] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:10:53.532 [2024-11-26 19:51:44.453301] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:53.532 [2024-11-26 19:51:44.453315] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:10:53.532 [2024-11-26 19:51:44.453324] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:53.532 [2024-11-26 19:51:44.453686] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:53.532 [2024-11-26 19:51:44.453707] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:53.532 [2024-11-26 19:51:44.453764] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:10:53.533 [2024-11-26 19:51:44.453777] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:10:53.533 [2024-11-26 19:51:44.453783] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:10:53.533 [2024-11-26 19:51:44.453794] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:10:53.533 BaseBdev1 00:10:53.533 19:51:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.533 19:51:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:10:54.909 19:51:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:54.909 19:51:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:54.909 19:51:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:54.909 19:51:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:54.909 19:51:45 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:54.909 19:51:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:54.909 19:51:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:54.909 19:51:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:54.909 19:51:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:54.909 19:51:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:54.909 19:51:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:54.909 19:51:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:54.909 19:51:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.909 19:51:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.909 19:51:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.909 19:51:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:54.909 "name": "raid_bdev1", 00:10:54.909 "uuid": "6de612ee-ce67-4bcf-9c08-9b00092a4550", 00:10:54.909 "strip_size_kb": 0, 00:10:54.909 "state": "online", 00:10:54.909 "raid_level": "raid1", 00:10:54.909 "superblock": true, 00:10:54.909 "num_base_bdevs": 4, 00:10:54.909 "num_base_bdevs_discovered": 2, 00:10:54.909 "num_base_bdevs_operational": 2, 00:10:54.909 "base_bdevs_list": [ 00:10:54.909 { 00:10:54.909 "name": null, 00:10:54.909 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:54.909 "is_configured": false, 00:10:54.909 "data_offset": 0, 00:10:54.909 "data_size": 63488 00:10:54.909 }, 00:10:54.909 { 00:10:54.909 "name": null, 00:10:54.909 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:54.909 
"is_configured": false, 00:10:54.909 "data_offset": 2048, 00:10:54.909 "data_size": 63488 00:10:54.909 }, 00:10:54.909 { 00:10:54.909 "name": "BaseBdev3", 00:10:54.909 "uuid": "e5d88960-3002-5734-bb7e-3f7745a557e7", 00:10:54.909 "is_configured": true, 00:10:54.909 "data_offset": 2048, 00:10:54.909 "data_size": 63488 00:10:54.909 }, 00:10:54.909 { 00:10:54.909 "name": "BaseBdev4", 00:10:54.909 "uuid": "bab8ff54-57f7-5b87-b948-aeef6515df97", 00:10:54.909 "is_configured": true, 00:10:54.909 "data_offset": 2048, 00:10:54.909 "data_size": 63488 00:10:54.909 } 00:10:54.909 ] 00:10:54.909 }' 00:10:54.909 19:51:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:54.909 19:51:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.909 19:51:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:10:54.910 19:51:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:54.910 19:51:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:10:54.910 19:51:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:10:54.910 19:51:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:54.910 19:51:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:54.910 19:51:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.910 19:51:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.910 19:51:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:54.910 19:51:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.910 19:51:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:10:54.910 "name": "raid_bdev1", 00:10:54.910 "uuid": "6de612ee-ce67-4bcf-9c08-9b00092a4550", 00:10:54.910 "strip_size_kb": 0, 00:10:54.910 "state": "online", 00:10:54.910 "raid_level": "raid1", 00:10:54.910 "superblock": true, 00:10:54.910 "num_base_bdevs": 4, 00:10:54.910 "num_base_bdevs_discovered": 2, 00:10:54.910 "num_base_bdevs_operational": 2, 00:10:54.910 "base_bdevs_list": [ 00:10:54.910 { 00:10:54.910 "name": null, 00:10:54.910 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:54.910 "is_configured": false, 00:10:54.910 "data_offset": 0, 00:10:54.910 "data_size": 63488 00:10:54.910 }, 00:10:54.910 { 00:10:54.910 "name": null, 00:10:54.910 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:54.910 "is_configured": false, 00:10:54.910 "data_offset": 2048, 00:10:54.910 "data_size": 63488 00:10:54.910 }, 00:10:54.910 { 00:10:54.910 "name": "BaseBdev3", 00:10:54.910 "uuid": "e5d88960-3002-5734-bb7e-3f7745a557e7", 00:10:54.910 "is_configured": true, 00:10:54.910 "data_offset": 2048, 00:10:54.910 "data_size": 63488 00:10:54.910 }, 00:10:54.910 { 00:10:54.910 "name": "BaseBdev4", 00:10:54.910 "uuid": "bab8ff54-57f7-5b87-b948-aeef6515df97", 00:10:54.910 "is_configured": true, 00:10:54.910 "data_offset": 2048, 00:10:54.910 "data_size": 63488 00:10:54.910 } 00:10:54.910 ] 00:10:54.910 }' 00:10:54.910 19:51:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:55.168 19:51:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:10:55.168 19:51:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:55.168 19:51:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:10:55.168 19:51:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:10:55.168 19:51:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # local 
es=0 00:10:55.168 19:51:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:10:55.168 19:51:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:10:55.168 19:51:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:55.168 19:51:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:10:55.169 19:51:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:55.169 19:51:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:10:55.169 19:51:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.169 19:51:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.169 [2024-11-26 19:51:45.909568] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:55.169 [2024-11-26 19:51:45.909738] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:10:55.169 [2024-11-26 19:51:45.909749] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:10:55.169 request: 00:10:55.169 { 00:10:55.169 "base_bdev": "BaseBdev1", 00:10:55.169 "raid_bdev": "raid_bdev1", 00:10:55.169 "method": "bdev_raid_add_base_bdev", 00:10:55.169 "req_id": 1 00:10:55.169 } 00:10:55.169 Got JSON-RPC error response 00:10:55.169 response: 00:10:55.169 { 00:10:55.169 "code": -22, 00:10:55.169 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:10:55.169 } 00:10:55.169 19:51:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:55.169 19:51:45 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@655 -- # es=1 00:10:55.169 19:51:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:55.169 19:51:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:55.169 19:51:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:55.169 19:51:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:10:56.104 19:51:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:56.104 19:51:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:56.104 19:51:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:56.104 19:51:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:56.104 19:51:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:56.104 19:51:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:56.104 19:51:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:56.104 19:51:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:56.104 19:51:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:56.104 19:51:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:56.104 19:51:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:56.104 19:51:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.104 19:51:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.104 19:51:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:10:56.104 19:51:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.104 19:51:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:56.104 "name": "raid_bdev1", 00:10:56.104 "uuid": "6de612ee-ce67-4bcf-9c08-9b00092a4550", 00:10:56.104 "strip_size_kb": 0, 00:10:56.104 "state": "online", 00:10:56.104 "raid_level": "raid1", 00:10:56.104 "superblock": true, 00:10:56.104 "num_base_bdevs": 4, 00:10:56.104 "num_base_bdevs_discovered": 2, 00:10:56.104 "num_base_bdevs_operational": 2, 00:10:56.104 "base_bdevs_list": [ 00:10:56.104 { 00:10:56.104 "name": null, 00:10:56.104 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:56.104 "is_configured": false, 00:10:56.104 "data_offset": 0, 00:10:56.104 "data_size": 63488 00:10:56.104 }, 00:10:56.104 { 00:10:56.104 "name": null, 00:10:56.104 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:56.104 "is_configured": false, 00:10:56.104 "data_offset": 2048, 00:10:56.104 "data_size": 63488 00:10:56.104 }, 00:10:56.104 { 00:10:56.104 "name": "BaseBdev3", 00:10:56.104 "uuid": "e5d88960-3002-5734-bb7e-3f7745a557e7", 00:10:56.104 "is_configured": true, 00:10:56.104 "data_offset": 2048, 00:10:56.104 "data_size": 63488 00:10:56.104 }, 00:10:56.104 { 00:10:56.104 "name": "BaseBdev4", 00:10:56.104 "uuid": "bab8ff54-57f7-5b87-b948-aeef6515df97", 00:10:56.104 "is_configured": true, 00:10:56.104 "data_offset": 2048, 00:10:56.104 "data_size": 63488 00:10:56.104 } 00:10:56.104 ] 00:10:56.104 }' 00:10:56.104 19:51:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:56.104 19:51:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.362 19:51:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:10:56.362 19:51:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:56.362 19:51:47 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:10:56.362 19:51:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:10:56.362 19:51:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:56.362 19:51:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:56.362 19:51:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.362 19:51:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.362 19:51:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:56.362 19:51:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.362 19:51:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:56.362 "name": "raid_bdev1", 00:10:56.362 "uuid": "6de612ee-ce67-4bcf-9c08-9b00092a4550", 00:10:56.362 "strip_size_kb": 0, 00:10:56.362 "state": "online", 00:10:56.362 "raid_level": "raid1", 00:10:56.362 "superblock": true, 00:10:56.362 "num_base_bdevs": 4, 00:10:56.362 "num_base_bdevs_discovered": 2, 00:10:56.362 "num_base_bdevs_operational": 2, 00:10:56.362 "base_bdevs_list": [ 00:10:56.362 { 00:10:56.362 "name": null, 00:10:56.362 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:56.362 "is_configured": false, 00:10:56.362 "data_offset": 0, 00:10:56.362 "data_size": 63488 00:10:56.362 }, 00:10:56.362 { 00:10:56.362 "name": null, 00:10:56.362 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:56.362 "is_configured": false, 00:10:56.362 "data_offset": 2048, 00:10:56.362 "data_size": 63488 00:10:56.362 }, 00:10:56.362 { 00:10:56.362 "name": "BaseBdev3", 00:10:56.362 "uuid": "e5d88960-3002-5734-bb7e-3f7745a557e7", 00:10:56.362 "is_configured": true, 00:10:56.362 "data_offset": 2048, 00:10:56.362 "data_size": 63488 00:10:56.362 }, 
00:10:56.362 { 00:10:56.362 "name": "BaseBdev4", 00:10:56.362 "uuid": "bab8ff54-57f7-5b87-b948-aeef6515df97", 00:10:56.362 "is_configured": true, 00:10:56.362 "data_offset": 2048, 00:10:56.362 "data_size": 63488 00:10:56.362 } 00:10:56.362 ] 00:10:56.362 }' 00:10:56.362 19:51:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:56.362 19:51:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:10:56.362 19:51:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:56.620 19:51:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:10:56.620 19:51:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 75846 00:10:56.620 19:51:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 75846 ']' 00:10:56.620 19:51:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 75846 00:10:56.620 19:51:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:10:56.620 19:51:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:56.620 19:51:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75846 00:10:56.620 killing process with pid 75846 00:10:56.620 Received shutdown signal, test time was about 60.000000 seconds 00:10:56.620 00:10:56.620 Latency(us) 00:10:56.620 [2024-11-26T19:51:47.557Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:56.620 [2024-11-26T19:51:47.557Z] =================================================================================================================== 00:10:56.620 [2024-11-26T19:51:47.557Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:10:56.620 19:51:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 
00:10:56.620 19:51:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:56.620 19:51:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75846' 00:10:56.620 19:51:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 75846 00:10:56.620 [2024-11-26 19:51:47.344550] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:56.620 19:51:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 75846 00:10:56.620 [2024-11-26 19:51:47.344648] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:56.620 [2024-11-26 19:51:47.344702] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:56.620 [2024-11-26 19:51:47.344711] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:10:56.878 [2024-11-26 19:51:47.580090] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:57.445 19:51:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:10:57.445 00:10:57.446 real 0m21.931s 00:10:57.446 user 0m25.790s 00:10:57.446 sys 0m3.089s 00:10:57.446 19:51:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:57.446 19:51:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.446 ************************************ 00:10:57.446 END TEST raid_rebuild_test_sb 00:10:57.446 ************************************ 00:10:57.446 19:51:48 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true true 00:10:57.446 19:51:48 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:10:57.446 19:51:48 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:57.446 19:51:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 
00:10:57.446 ************************************ 00:10:57.446 START TEST raid_rebuild_test_io 00:10:57.446 ************************************ 00:10:57.446 19:51:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 false true true 00:10:57.446 19:51:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:10:57.446 19:51:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:10:57.446 19:51:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:10:57.446 19:51:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:10:57.446 19:51:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:10:57.446 19:51:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:10:57.446 19:51:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:10:57.446 19:51:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:10:57.446 19:51:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:10:57.446 19:51:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:10:57.446 19:51:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:10:57.446 19:51:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:10:57.446 19:51:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:10:57.446 19:51:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:10:57.446 19:51:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:10:57.446 19:51:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:10:57.446 19:51:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev4 00:10:57.446 19:51:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:10:57.446 19:51:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:10:57.446 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:57.446 19:51:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:57.446 19:51:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:10:57.446 19:51:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:10:57.446 19:51:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:10:57.446 19:51:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:10:57.446 19:51:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:10:57.446 19:51:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:10:57.446 19:51:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:10:57.446 19:51:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:10:57.446 19:51:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:10:57.446 19:51:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=76579 00:10:57.446 19:51:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 76579 00:10:57.446 19:51:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # '[' -z 76579 ']' 00:10:57.446 19:51:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:57.446 19:51:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:57.446 19:51:48 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:57.446 19:51:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:57.446 19:51:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:10:57.446 19:51:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:10:57.446 I/O size of 3145728 is greater than zero copy threshold (65536). 00:10:57.446 Zero copy mechanism will not be used. 00:10:57.446 [2024-11-26 19:51:48.269171] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 00:10:57.446 [2024-11-26 19:51:48.269285] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76579 ] 00:10:57.704 [2024-11-26 19:51:48.419546] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:57.704 [2024-11-26 19:51:48.518417] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:57.963 [2024-11-26 19:51:48.652798] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:57.963 [2024-11-26 19:51:48.652835] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:58.222 19:51:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:58.222 19:51:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # return 0 00:10:58.222 19:51:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:10:58.222 19:51:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 
00:10:58.222 19:51:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.222 19:51:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:10:58.222 BaseBdev1_malloc 00:10:58.222 19:51:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.222 19:51:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:10:58.222 19:51:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.222 19:51:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:10:58.222 [2024-11-26 19:51:49.117817] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:10:58.222 [2024-11-26 19:51:49.117886] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:58.222 [2024-11-26 19:51:49.117907] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:58.222 [2024-11-26 19:51:49.117918] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:58.222 [2024-11-26 19:51:49.120059] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:58.222 [2024-11-26 19:51:49.120257] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:58.222 BaseBdev1 00:10:58.222 19:51:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.222 19:51:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:10:58.222 19:51:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:58.222 19:51:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.222 19:51:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # 
set +x 00:10:58.222 BaseBdev2_malloc 00:10:58.222 19:51:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.222 19:51:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:10:58.222 19:51:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.222 19:51:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:10:58.479 [2024-11-26 19:51:49.157514] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:10:58.479 [2024-11-26 19:51:49.157570] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:58.479 [2024-11-26 19:51:49.157589] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:10:58.479 [2024-11-26 19:51:49.157600] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:58.479 [2024-11-26 19:51:49.159698] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:58.479 [2024-11-26 19:51:49.159733] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:58.479 BaseBdev2 00:10:58.479 19:51:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.479 19:51:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:10:58.479 19:51:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:58.479 19:51:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.479 19:51:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:10:58.479 BaseBdev3_malloc 00:10:58.479 19:51:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.479 19:51:49 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:10:58.479 19:51:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.479 19:51:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:10:58.479 [2024-11-26 19:51:49.208642] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:10:58.479 [2024-11-26 19:51:49.208693] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:58.479 [2024-11-26 19:51:49.208714] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:10:58.479 [2024-11-26 19:51:49.208724] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:58.479 [2024-11-26 19:51:49.210776] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:58.479 [2024-11-26 19:51:49.210812] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:58.479 BaseBdev3 00:10:58.479 19:51:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.479 19:51:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:10:58.479 19:51:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:10:58.479 19:51:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.479 19:51:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:10:58.479 BaseBdev4_malloc 00:10:58.479 19:51:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.479 19:51:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:10:58.480 19:51:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 
00:10:58.480 19:51:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:10:58.480 [2024-11-26 19:51:49.244147] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:10:58.480 [2024-11-26 19:51:49.244196] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:58.480 [2024-11-26 19:51:49.244211] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:10:58.480 [2024-11-26 19:51:49.244221] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:58.480 [2024-11-26 19:51:49.246248] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:58.480 [2024-11-26 19:51:49.246410] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:10:58.480 BaseBdev4 00:10:58.480 19:51:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.480 19:51:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:10:58.480 19:51:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.480 19:51:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:10:58.480 spare_malloc 00:10:58.480 19:51:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.480 19:51:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:10:58.480 19:51:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.480 19:51:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:10:58.480 spare_delay 00:10:58.480 19:51:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.480 19:51:49 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:10:58.480 19:51:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.480 19:51:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:10:58.480 [2024-11-26 19:51:49.287659] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:10:58.480 [2024-11-26 19:51:49.287708] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:58.480 [2024-11-26 19:51:49.287725] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:10:58.480 [2024-11-26 19:51:49.287737] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:58.480 [2024-11-26 19:51:49.289824] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:58.480 [2024-11-26 19:51:49.289976] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:10:58.480 spare 00:10:58.480 19:51:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.480 19:51:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:10:58.480 19:51:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.480 19:51:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:10:58.480 [2024-11-26 19:51:49.295710] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:58.480 [2024-11-26 19:51:49.297622] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:58.480 [2024-11-26 19:51:49.297745] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:58.480 [2024-11-26 19:51:49.297818] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev BaseBdev4 is claimed 00:10:58.480 [2024-11-26 19:51:49.297916] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:10:58.480 [2024-11-26 19:51:49.297986] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:10:58.480 [2024-11-26 19:51:49.298286] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:10:58.480 [2024-11-26 19:51:49.298517] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:10:58.480 [2024-11-26 19:51:49.298582] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:10:58.480 [2024-11-26 19:51:49.299588] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:58.480 19:51:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.480 19:51:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:10:58.480 19:51:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:58.480 19:51:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:58.480 19:51:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:58.480 19:51:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:58.480 19:51:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:58.480 19:51:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:58.480 19:51:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:58.480 19:51:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:58.480 19:51:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:10:58.480 19:51:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:58.480 19:51:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.480 19:51:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:10:58.480 19:51:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:58.480 19:51:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.480 19:51:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:58.480 "name": "raid_bdev1", 00:10:58.480 "uuid": "03b8a56b-9cf6-4208-9012-efbe2cb7e053", 00:10:58.480 "strip_size_kb": 0, 00:10:58.480 "state": "online", 00:10:58.480 "raid_level": "raid1", 00:10:58.480 "superblock": false, 00:10:58.480 "num_base_bdevs": 4, 00:10:58.480 "num_base_bdevs_discovered": 4, 00:10:58.480 "num_base_bdevs_operational": 4, 00:10:58.480 "base_bdevs_list": [ 00:10:58.480 { 00:10:58.480 "name": "BaseBdev1", 00:10:58.480 "uuid": "80c33e91-f37a-57dd-ac4a-b9d34d25501f", 00:10:58.480 "is_configured": true, 00:10:58.480 "data_offset": 0, 00:10:58.480 "data_size": 65536 00:10:58.480 }, 00:10:58.480 { 00:10:58.480 "name": "BaseBdev2", 00:10:58.480 "uuid": "ff9f6292-0e84-52fa-9e5d-d37fbbee60db", 00:10:58.480 "is_configured": true, 00:10:58.480 "data_offset": 0, 00:10:58.480 "data_size": 65536 00:10:58.480 }, 00:10:58.480 { 00:10:58.480 "name": "BaseBdev3", 00:10:58.480 "uuid": "5b05a59d-713a-5936-b6a6-0ad59326e03c", 00:10:58.480 "is_configured": true, 00:10:58.480 "data_offset": 0, 00:10:58.480 "data_size": 65536 00:10:58.480 }, 00:10:58.480 { 00:10:58.480 "name": "BaseBdev4", 00:10:58.480 "uuid": "4f7dfc45-ca32-5607-b378-615913370daa", 00:10:58.480 "is_configured": true, 00:10:58.480 "data_offset": 0, 00:10:58.480 "data_size": 65536 00:10:58.480 } 00:10:58.480 ] 00:10:58.480 }' 00:10:58.480 
19:51:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:58.480 19:51:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:10:58.736 19:51:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:58.736 19:51:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.736 19:51:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:10:58.736 19:51:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:10:58.736 [2024-11-26 19:51:49.616295] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:58.736 19:51:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.736 19:51:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:10:58.736 19:51:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:10:58.736 19:51:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:58.736 19:51:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.736 19:51:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:10:58.736 19:51:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.736 19:51:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:10:58.736 19:51:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:10:58.736 19:51:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:10:58.736 19:51:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.736 19:51:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 
-- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:58.736 19:51:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:10:59.017 [2024-11-26 19:51:49.671992] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:59.017 19:51:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.017 19:51:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:59.017 19:51:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:59.017 19:51:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:59.017 19:51:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:59.017 19:51:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:59.017 19:51:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:59.017 19:51:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:59.017 19:51:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:59.017 19:51:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:59.017 19:51:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:59.017 19:51:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:59.017 19:51:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:59.017 19:51:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.017 19:51:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:10:59.017 19:51:49 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.017 19:51:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:59.017 "name": "raid_bdev1", 00:10:59.017 "uuid": "03b8a56b-9cf6-4208-9012-efbe2cb7e053", 00:10:59.017 "strip_size_kb": 0, 00:10:59.017 "state": "online", 00:10:59.017 "raid_level": "raid1", 00:10:59.017 "superblock": false, 00:10:59.017 "num_base_bdevs": 4, 00:10:59.017 "num_base_bdevs_discovered": 3, 00:10:59.017 "num_base_bdevs_operational": 3, 00:10:59.017 "base_bdevs_list": [ 00:10:59.017 { 00:10:59.017 "name": null, 00:10:59.017 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:59.017 "is_configured": false, 00:10:59.017 "data_offset": 0, 00:10:59.017 "data_size": 65536 00:10:59.017 }, 00:10:59.017 { 00:10:59.017 "name": "BaseBdev2", 00:10:59.017 "uuid": "ff9f6292-0e84-52fa-9e5d-d37fbbee60db", 00:10:59.017 "is_configured": true, 00:10:59.017 "data_offset": 0, 00:10:59.017 "data_size": 65536 00:10:59.017 }, 00:10:59.017 { 00:10:59.017 "name": "BaseBdev3", 00:10:59.017 "uuid": "5b05a59d-713a-5936-b6a6-0ad59326e03c", 00:10:59.017 "is_configured": true, 00:10:59.017 "data_offset": 0, 00:10:59.017 "data_size": 65536 00:10:59.017 }, 00:10:59.017 { 00:10:59.017 "name": "BaseBdev4", 00:10:59.017 "uuid": "4f7dfc45-ca32-5607-b378-615913370daa", 00:10:59.017 "is_configured": true, 00:10:59.017 "data_offset": 0, 00:10:59.017 "data_size": 65536 00:10:59.017 } 00:10:59.017 ] 00:10:59.017 }' 00:10:59.017 19:51:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:59.018 19:51:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:10:59.018 [2024-11-26 19:51:49.760461] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:10:59.018 I/O size of 3145728 is greater than zero copy threshold (65536). 00:10:59.018 Zero copy mechanism will not be used. 00:10:59.018 Running I/O for 60 seconds... 
00:10:59.274 19:51:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:10:59.274 19:51:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.274 19:51:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:10:59.274 [2024-11-26 19:51:49.996508] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:10:59.274 19:51:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.274 19:51:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:10:59.274 [2024-11-26 19:51:50.056858] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:10:59.274 [2024-11-26 19:51:50.058495] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:10:59.274 [2024-11-26 19:51:50.166003] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:10:59.275 [2024-11-26 19:51:50.166384] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:10:59.531 [2024-11-26 19:51:50.374360] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:10:59.531 [2024-11-26 19:51:50.374884] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:11:00.094 [2024-11-26 19:51:50.738774] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:11:00.094 164.00 IOPS, 492.00 MiB/s [2024-11-26T19:51:51.031Z] [2024-11-26 19:51:50.955074] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:11:00.094 19:51:51 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:00.094 19:51:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:00.094 19:51:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:00.351 19:51:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:00.351 19:51:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:00.351 19:51:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:00.351 19:51:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:00.351 19:51:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.351 19:51:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:00.351 19:51:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.351 19:51:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:00.351 "name": "raid_bdev1", 00:11:00.351 "uuid": "03b8a56b-9cf6-4208-9012-efbe2cb7e053", 00:11:00.351 "strip_size_kb": 0, 00:11:00.351 "state": "online", 00:11:00.351 "raid_level": "raid1", 00:11:00.351 "superblock": false, 00:11:00.351 "num_base_bdevs": 4, 00:11:00.351 "num_base_bdevs_discovered": 4, 00:11:00.351 "num_base_bdevs_operational": 4, 00:11:00.351 "process": { 00:11:00.351 "type": "rebuild", 00:11:00.351 "target": "spare", 00:11:00.351 "progress": { 00:11:00.351 "blocks": 10240, 00:11:00.351 "percent": 15 00:11:00.351 } 00:11:00.351 }, 00:11:00.351 "base_bdevs_list": [ 00:11:00.351 { 00:11:00.351 "name": "spare", 00:11:00.351 "uuid": "93c0c58b-22cb-5606-bf6e-7cce45b0e0ea", 00:11:00.351 "is_configured": true, 00:11:00.351 "data_offset": 0, 00:11:00.351 "data_size": 65536 00:11:00.351 }, 00:11:00.351 { 
00:11:00.351 "name": "BaseBdev2", 00:11:00.351 "uuid": "ff9f6292-0e84-52fa-9e5d-d37fbbee60db", 00:11:00.351 "is_configured": true, 00:11:00.351 "data_offset": 0, 00:11:00.351 "data_size": 65536 00:11:00.351 }, 00:11:00.351 { 00:11:00.351 "name": "BaseBdev3", 00:11:00.351 "uuid": "5b05a59d-713a-5936-b6a6-0ad59326e03c", 00:11:00.351 "is_configured": true, 00:11:00.351 "data_offset": 0, 00:11:00.351 "data_size": 65536 00:11:00.351 }, 00:11:00.351 { 00:11:00.351 "name": "BaseBdev4", 00:11:00.351 "uuid": "4f7dfc45-ca32-5607-b378-615913370daa", 00:11:00.351 "is_configured": true, 00:11:00.351 "data_offset": 0, 00:11:00.351 "data_size": 65536 00:11:00.351 } 00:11:00.351 ] 00:11:00.351 }' 00:11:00.351 19:51:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:00.351 19:51:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:00.351 19:51:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:00.351 19:51:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:00.351 19:51:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:11:00.351 19:51:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.351 19:51:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:00.351 [2024-11-26 19:51:51.119548] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:00.351 [2024-11-26 19:51:51.175210] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:11:00.351 [2024-11-26 19:51:51.189963] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:00.351 [2024-11-26 19:51:51.190133] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:00.351 [2024-11-26 19:51:51.190150] 
bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:11:00.351 [2024-11-26 19:51:51.209412] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:11:00.351 19:51:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.351 19:51:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:00.351 19:51:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:00.351 19:51:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:00.351 19:51:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:00.351 19:51:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:00.352 19:51:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:00.352 19:51:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:00.352 19:51:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:00.352 19:51:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:00.352 19:51:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:00.352 19:51:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:00.352 19:51:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:00.352 19:51:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.352 19:51:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:00.352 19:51:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:11:00.352 19:51:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:00.352 "name": "raid_bdev1", 00:11:00.352 "uuid": "03b8a56b-9cf6-4208-9012-efbe2cb7e053", 00:11:00.352 "strip_size_kb": 0, 00:11:00.352 "state": "online", 00:11:00.352 "raid_level": "raid1", 00:11:00.352 "superblock": false, 00:11:00.352 "num_base_bdevs": 4, 00:11:00.352 "num_base_bdevs_discovered": 3, 00:11:00.352 "num_base_bdevs_operational": 3, 00:11:00.352 "base_bdevs_list": [ 00:11:00.352 { 00:11:00.352 "name": null, 00:11:00.352 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:00.352 "is_configured": false, 00:11:00.352 "data_offset": 0, 00:11:00.352 "data_size": 65536 00:11:00.352 }, 00:11:00.352 { 00:11:00.352 "name": "BaseBdev2", 00:11:00.352 "uuid": "ff9f6292-0e84-52fa-9e5d-d37fbbee60db", 00:11:00.352 "is_configured": true, 00:11:00.352 "data_offset": 0, 00:11:00.352 "data_size": 65536 00:11:00.352 }, 00:11:00.352 { 00:11:00.352 "name": "BaseBdev3", 00:11:00.352 "uuid": "5b05a59d-713a-5936-b6a6-0ad59326e03c", 00:11:00.352 "is_configured": true, 00:11:00.352 "data_offset": 0, 00:11:00.352 "data_size": 65536 00:11:00.352 }, 00:11:00.352 { 00:11:00.352 "name": "BaseBdev4", 00:11:00.352 "uuid": "4f7dfc45-ca32-5607-b378-615913370daa", 00:11:00.352 "is_configured": true, 00:11:00.352 "data_offset": 0, 00:11:00.352 "data_size": 65536 00:11:00.352 } 00:11:00.352 ] 00:11:00.352 }' 00:11:00.352 19:51:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:00.352 19:51:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:00.608 19:51:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:00.608 19:51:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:00.608 19:51:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:00.608 19:51:51 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:00.608 19:51:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:00.608 19:51:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:00.608 19:51:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.608 19:51:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:00.608 19:51:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:00.866 19:51:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.866 19:51:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:00.866 "name": "raid_bdev1", 00:11:00.866 "uuid": "03b8a56b-9cf6-4208-9012-efbe2cb7e053", 00:11:00.866 "strip_size_kb": 0, 00:11:00.866 "state": "online", 00:11:00.866 "raid_level": "raid1", 00:11:00.866 "superblock": false, 00:11:00.866 "num_base_bdevs": 4, 00:11:00.866 "num_base_bdevs_discovered": 3, 00:11:00.866 "num_base_bdevs_operational": 3, 00:11:00.866 "base_bdevs_list": [ 00:11:00.866 { 00:11:00.866 "name": null, 00:11:00.866 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:00.866 "is_configured": false, 00:11:00.866 "data_offset": 0, 00:11:00.866 "data_size": 65536 00:11:00.866 }, 00:11:00.866 { 00:11:00.866 "name": "BaseBdev2", 00:11:00.866 "uuid": "ff9f6292-0e84-52fa-9e5d-d37fbbee60db", 00:11:00.866 "is_configured": true, 00:11:00.866 "data_offset": 0, 00:11:00.866 "data_size": 65536 00:11:00.866 }, 00:11:00.866 { 00:11:00.866 "name": "BaseBdev3", 00:11:00.866 "uuid": "5b05a59d-713a-5936-b6a6-0ad59326e03c", 00:11:00.866 "is_configured": true, 00:11:00.866 "data_offset": 0, 00:11:00.866 "data_size": 65536 00:11:00.866 }, 00:11:00.866 { 00:11:00.866 "name": "BaseBdev4", 00:11:00.866 "uuid": 
"4f7dfc45-ca32-5607-b378-615913370daa", 00:11:00.866 "is_configured": true, 00:11:00.866 "data_offset": 0, 00:11:00.866 "data_size": 65536 00:11:00.866 } 00:11:00.866 ] 00:11:00.866 }' 00:11:00.866 19:51:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:00.866 19:51:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:00.866 19:51:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:00.866 19:51:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:00.866 19:51:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:00.866 19:51:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.866 19:51:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:00.866 [2024-11-26 19:51:51.641731] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:00.866 19:51:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.866 19:51:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:11:00.866 [2024-11-26 19:51:51.695072] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:11:00.866 [2024-11-26 19:51:51.696653] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:01.123 165.50 IOPS, 496.50 MiB/s [2024-11-26T19:51:52.060Z] [2024-11-26 19:51:51.809012] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:11:01.123 [2024-11-26 19:51:51.810004] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:11:01.123 [2024-11-26 19:51:52.030718] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:11:01.123 [2024-11-26 19:51:52.031108] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:11:01.687 [2024-11-26 19:51:52.353562] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:11:01.687 [2024-11-26 19:51:52.354112] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:11:01.687 [2024-11-26 19:51:52.482490] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:11:01.945 19:51:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:01.945 19:51:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:01.945 19:51:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:01.945 19:51:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:01.945 19:51:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:01.945 19:51:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:01.945 19:51:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.945 19:51:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:01.945 19:51:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:01.945 19:51:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.945 19:51:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:01.945 "name": "raid_bdev1", 
00:11:01.945 "uuid": "03b8a56b-9cf6-4208-9012-efbe2cb7e053", 00:11:01.945 "strip_size_kb": 0, 00:11:01.945 "state": "online", 00:11:01.945 "raid_level": "raid1", 00:11:01.945 "superblock": false, 00:11:01.945 "num_base_bdevs": 4, 00:11:01.945 "num_base_bdevs_discovered": 4, 00:11:01.945 "num_base_bdevs_operational": 4, 00:11:01.945 "process": { 00:11:01.945 "type": "rebuild", 00:11:01.945 "target": "spare", 00:11:01.945 "progress": { 00:11:01.945 "blocks": 12288, 00:11:01.945 "percent": 18 00:11:01.945 } 00:11:01.945 }, 00:11:01.945 "base_bdevs_list": [ 00:11:01.945 { 00:11:01.945 "name": "spare", 00:11:01.945 "uuid": "93c0c58b-22cb-5606-bf6e-7cce45b0e0ea", 00:11:01.945 "is_configured": true, 00:11:01.945 "data_offset": 0, 00:11:01.945 "data_size": 65536 00:11:01.945 }, 00:11:01.945 { 00:11:01.945 "name": "BaseBdev2", 00:11:01.945 "uuid": "ff9f6292-0e84-52fa-9e5d-d37fbbee60db", 00:11:01.945 "is_configured": true, 00:11:01.945 "data_offset": 0, 00:11:01.945 "data_size": 65536 00:11:01.945 }, 00:11:01.945 { 00:11:01.945 "name": "BaseBdev3", 00:11:01.945 "uuid": "5b05a59d-713a-5936-b6a6-0ad59326e03c", 00:11:01.945 "is_configured": true, 00:11:01.945 "data_offset": 0, 00:11:01.945 "data_size": 65536 00:11:01.945 }, 00:11:01.945 { 00:11:01.945 "name": "BaseBdev4", 00:11:01.945 "uuid": "4f7dfc45-ca32-5607-b378-615913370daa", 00:11:01.945 "is_configured": true, 00:11:01.945 "data_offset": 0, 00:11:01.945 "data_size": 65536 00:11:01.945 } 00:11:01.945 ] 00:11:01.945 }' 00:11:01.945 19:51:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:01.945 [2024-11-26 19:51:52.716843] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:11:01.945 19:51:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:01.945 19:51:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 
00:11:01.945 137.67 IOPS, 413.00 MiB/s [2024-11-26T19:51:52.882Z] 19:51:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:01.945 19:51:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:11:01.945 19:51:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:11:01.945 19:51:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:11:01.945 19:51:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:11:01.945 19:51:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:01.945 19:51:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.945 19:51:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:01.945 [2024-11-26 19:51:52.775578] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:01.945 [2024-11-26 19:51:52.838803] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:11:01.945 [2024-11-26 19:51:52.839321] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:11:02.203 [2024-11-26 19:51:52.952115] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:11:02.203 [2024-11-26 19:51:52.952146] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:11:02.203 19:51:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.203 19:51:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:11:02.203 19:51:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:11:02.203 19:51:52 bdev_raid.raid_rebuild_test_io 
-- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:02.203 19:51:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:02.203 19:51:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:02.203 19:51:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:02.203 19:51:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:02.203 19:51:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:02.203 19:51:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.203 19:51:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:02.203 19:51:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:02.203 19:51:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.203 19:51:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:02.203 "name": "raid_bdev1", 00:11:02.203 "uuid": "03b8a56b-9cf6-4208-9012-efbe2cb7e053", 00:11:02.203 "strip_size_kb": 0, 00:11:02.203 "state": "online", 00:11:02.203 "raid_level": "raid1", 00:11:02.203 "superblock": false, 00:11:02.203 "num_base_bdevs": 4, 00:11:02.203 "num_base_bdevs_discovered": 3, 00:11:02.203 "num_base_bdevs_operational": 3, 00:11:02.203 "process": { 00:11:02.203 "type": "rebuild", 00:11:02.203 "target": "spare", 00:11:02.203 "progress": { 00:11:02.203 "blocks": 16384, 00:11:02.203 "percent": 25 00:11:02.203 } 00:11:02.203 }, 00:11:02.203 "base_bdevs_list": [ 00:11:02.203 { 00:11:02.203 "name": "spare", 00:11:02.203 "uuid": "93c0c58b-22cb-5606-bf6e-7cce45b0e0ea", 00:11:02.203 "is_configured": true, 00:11:02.203 "data_offset": 0, 00:11:02.203 "data_size": 65536 00:11:02.203 }, 00:11:02.203 { 
00:11:02.203 "name": null, 00:11:02.203 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:02.203 "is_configured": false, 00:11:02.203 "data_offset": 0, 00:11:02.203 "data_size": 65536 00:11:02.203 }, 00:11:02.203 { 00:11:02.203 "name": "BaseBdev3", 00:11:02.203 "uuid": "5b05a59d-713a-5936-b6a6-0ad59326e03c", 00:11:02.203 "is_configured": true, 00:11:02.203 "data_offset": 0, 00:11:02.203 "data_size": 65536 00:11:02.203 }, 00:11:02.203 { 00:11:02.203 "name": "BaseBdev4", 00:11:02.203 "uuid": "4f7dfc45-ca32-5607-b378-615913370daa", 00:11:02.203 "is_configured": true, 00:11:02.203 "data_offset": 0, 00:11:02.203 "data_size": 65536 00:11:02.203 } 00:11:02.203 ] 00:11:02.203 }' 00:11:02.203 19:51:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:02.203 19:51:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:02.203 19:51:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:02.203 19:51:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:02.203 19:51:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=383 00:11:02.203 19:51:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:02.203 19:51:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:02.203 19:51:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:02.203 19:51:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:02.203 19:51:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:02.203 19:51:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:02.203 19:51:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:11:02.203 19:51:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:02.203 19:51:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.203 19:51:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:02.203 19:51:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.203 19:51:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:02.203 "name": "raid_bdev1", 00:11:02.203 "uuid": "03b8a56b-9cf6-4208-9012-efbe2cb7e053", 00:11:02.203 "strip_size_kb": 0, 00:11:02.203 "state": "online", 00:11:02.204 "raid_level": "raid1", 00:11:02.204 "superblock": false, 00:11:02.204 "num_base_bdevs": 4, 00:11:02.204 "num_base_bdevs_discovered": 3, 00:11:02.204 "num_base_bdevs_operational": 3, 00:11:02.204 "process": { 00:11:02.204 "type": "rebuild", 00:11:02.204 "target": "spare", 00:11:02.204 "progress": { 00:11:02.204 "blocks": 16384, 00:11:02.204 "percent": 25 00:11:02.204 } 00:11:02.204 }, 00:11:02.204 "base_bdevs_list": [ 00:11:02.204 { 00:11:02.204 "name": "spare", 00:11:02.204 "uuid": "93c0c58b-22cb-5606-bf6e-7cce45b0e0ea", 00:11:02.204 "is_configured": true, 00:11:02.204 "data_offset": 0, 00:11:02.204 "data_size": 65536 00:11:02.204 }, 00:11:02.204 { 00:11:02.204 "name": null, 00:11:02.204 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:02.204 "is_configured": false, 00:11:02.204 "data_offset": 0, 00:11:02.204 "data_size": 65536 00:11:02.204 }, 00:11:02.204 { 00:11:02.204 "name": "BaseBdev3", 00:11:02.204 "uuid": "5b05a59d-713a-5936-b6a6-0ad59326e03c", 00:11:02.204 "is_configured": true, 00:11:02.204 "data_offset": 0, 00:11:02.204 "data_size": 65536 00:11:02.204 }, 00:11:02.204 { 00:11:02.204 "name": "BaseBdev4", 00:11:02.204 "uuid": "4f7dfc45-ca32-5607-b378-615913370daa", 00:11:02.204 "is_configured": true, 00:11:02.204 
"data_offset": 0, 00:11:02.204 "data_size": 65536 00:11:02.204 } 00:11:02.204 ] 00:11:02.204 }' 00:11:02.204 19:51:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:02.204 19:51:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:02.204 19:51:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:02.461 19:51:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:02.461 19:51:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:02.461 [2024-11-26 19:51:53.319282] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:11:02.461 [2024-11-26 19:51:53.319461] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:11:02.719 [2024-11-26 19:51:53.581084] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:11:02.976 117.75 IOPS, 353.25 MiB/s [2024-11-26T19:51:53.913Z] [2024-11-26 19:51:53.795930] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:11:02.976 [2024-11-26 19:51:53.796157] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:11:03.234 [2024-11-26 19:51:54.141609] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:11:03.234 19:51:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:03.234 19:51:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:03.234 19:51:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 
-- # local raid_bdev_name=raid_bdev1 00:11:03.234 19:51:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:03.234 19:51:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:03.234 19:51:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:03.234 19:51:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:03.234 19:51:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:03.234 19:51:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.234 19:51:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:03.234 19:51:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.563 19:51:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:03.563 "name": "raid_bdev1", 00:11:03.563 "uuid": "03b8a56b-9cf6-4208-9012-efbe2cb7e053", 00:11:03.563 "strip_size_kb": 0, 00:11:03.563 "state": "online", 00:11:03.563 "raid_level": "raid1", 00:11:03.563 "superblock": false, 00:11:03.563 "num_base_bdevs": 4, 00:11:03.563 "num_base_bdevs_discovered": 3, 00:11:03.563 "num_base_bdevs_operational": 3, 00:11:03.563 "process": { 00:11:03.563 "type": "rebuild", 00:11:03.563 "target": "spare", 00:11:03.563 "progress": { 00:11:03.563 "blocks": 34816, 00:11:03.563 "percent": 53 00:11:03.563 } 00:11:03.563 }, 00:11:03.563 "base_bdevs_list": [ 00:11:03.563 { 00:11:03.563 "name": "spare", 00:11:03.563 "uuid": "93c0c58b-22cb-5606-bf6e-7cce45b0e0ea", 00:11:03.563 "is_configured": true, 00:11:03.563 "data_offset": 0, 00:11:03.563 "data_size": 65536 00:11:03.563 }, 00:11:03.563 { 00:11:03.563 "name": null, 00:11:03.563 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:03.563 "is_configured": false, 00:11:03.563 "data_offset": 0, 
00:11:03.563 "data_size": 65536 00:11:03.563 }, 00:11:03.563 { 00:11:03.563 "name": "BaseBdev3", 00:11:03.563 "uuid": "5b05a59d-713a-5936-b6a6-0ad59326e03c", 00:11:03.563 "is_configured": true, 00:11:03.563 "data_offset": 0, 00:11:03.563 "data_size": 65536 00:11:03.563 }, 00:11:03.563 { 00:11:03.563 "name": "BaseBdev4", 00:11:03.563 "uuid": "4f7dfc45-ca32-5607-b378-615913370daa", 00:11:03.563 "is_configured": true, 00:11:03.563 "data_offset": 0, 00:11:03.563 "data_size": 65536 00:11:03.563 } 00:11:03.563 ] 00:11:03.563 }' 00:11:03.563 19:51:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:03.563 19:51:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:03.563 19:51:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:03.563 19:51:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:03.563 19:51:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:03.825 [2024-11-26 19:51:54.489401] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:11:03.825 [2024-11-26 19:51:54.708162] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:11:04.340 99.80 IOPS, 299.40 MiB/s [2024-11-26T19:51:55.277Z] [2024-11-26 19:51:55.046734] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:11:04.340 19:51:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:04.340 19:51:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:04.340 19:51:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:04.340 19:51:55 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:04.340 19:51:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:04.340 19:51:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:04.340 19:51:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:04.340 19:51:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.340 19:51:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:04.340 19:51:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:04.340 19:51:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.598 19:51:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:04.598 "name": "raid_bdev1", 00:11:04.598 "uuid": "03b8a56b-9cf6-4208-9012-efbe2cb7e053", 00:11:04.598 "strip_size_kb": 0, 00:11:04.598 "state": "online", 00:11:04.598 "raid_level": "raid1", 00:11:04.598 "superblock": false, 00:11:04.598 "num_base_bdevs": 4, 00:11:04.598 "num_base_bdevs_discovered": 3, 00:11:04.598 "num_base_bdevs_operational": 3, 00:11:04.598 "process": { 00:11:04.598 "type": "rebuild", 00:11:04.598 "target": "spare", 00:11:04.598 "progress": { 00:11:04.598 "blocks": 47104, 00:11:04.598 "percent": 71 00:11:04.598 } 00:11:04.598 }, 00:11:04.598 "base_bdevs_list": [ 00:11:04.598 { 00:11:04.598 "name": "spare", 00:11:04.598 "uuid": "93c0c58b-22cb-5606-bf6e-7cce45b0e0ea", 00:11:04.598 "is_configured": true, 00:11:04.598 "data_offset": 0, 00:11:04.598 "data_size": 65536 00:11:04.598 }, 00:11:04.598 { 00:11:04.598 "name": null, 00:11:04.598 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:04.598 "is_configured": false, 00:11:04.598 "data_offset": 0, 00:11:04.598 "data_size": 65536 00:11:04.598 }, 00:11:04.598 { 
00:11:04.598 "name": "BaseBdev3", 00:11:04.598 "uuid": "5b05a59d-713a-5936-b6a6-0ad59326e03c", 00:11:04.598 "is_configured": true, 00:11:04.598 "data_offset": 0, 00:11:04.598 "data_size": 65536 00:11:04.598 }, 00:11:04.598 { 00:11:04.598 "name": "BaseBdev4", 00:11:04.598 "uuid": "4f7dfc45-ca32-5607-b378-615913370daa", 00:11:04.598 "is_configured": true, 00:11:04.598 "data_offset": 0, 00:11:04.598 "data_size": 65536 00:11:04.598 } 00:11:04.598 ] 00:11:04.598 }' 00:11:04.598 19:51:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:04.598 19:51:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:04.598 19:51:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:04.598 19:51:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:04.598 19:51:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:04.598 [2024-11-26 19:51:55.372082] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:11:05.114 88.83 IOPS, 266.50 MiB/s [2024-11-26T19:51:56.051Z] [2024-11-26 19:51:55.800235] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:11:05.373 [2024-11-26 19:51:56.128230] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:11:05.373 [2024-11-26 19:51:56.233112] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:11:05.373 [2024-11-26 19:51:56.235840] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:05.632 19:51:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:05.632 19:51:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 
rebuild spare 00:11:05.632 19:51:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:05.632 19:51:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:05.632 19:51:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:05.632 19:51:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:05.632 19:51:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:05.632 19:51:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:05.632 19:51:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.632 19:51:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:05.632 19:51:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.632 19:51:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:05.632 "name": "raid_bdev1", 00:11:05.632 "uuid": "03b8a56b-9cf6-4208-9012-efbe2cb7e053", 00:11:05.632 "strip_size_kb": 0, 00:11:05.632 "state": "online", 00:11:05.632 "raid_level": "raid1", 00:11:05.632 "superblock": false, 00:11:05.632 "num_base_bdevs": 4, 00:11:05.632 "num_base_bdevs_discovered": 3, 00:11:05.632 "num_base_bdevs_operational": 3, 00:11:05.632 "base_bdevs_list": [ 00:11:05.632 { 00:11:05.632 "name": "spare", 00:11:05.632 "uuid": "93c0c58b-22cb-5606-bf6e-7cce45b0e0ea", 00:11:05.632 "is_configured": true, 00:11:05.632 "data_offset": 0, 00:11:05.632 "data_size": 65536 00:11:05.632 }, 00:11:05.632 { 00:11:05.632 "name": null, 00:11:05.632 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:05.632 "is_configured": false, 00:11:05.632 "data_offset": 0, 00:11:05.632 "data_size": 65536 00:11:05.632 }, 00:11:05.632 { 00:11:05.632 "name": "BaseBdev3", 00:11:05.632 "uuid": 
"5b05a59d-713a-5936-b6a6-0ad59326e03c", 00:11:05.632 "is_configured": true, 00:11:05.632 "data_offset": 0, 00:11:05.632 "data_size": 65536 00:11:05.632 }, 00:11:05.632 { 00:11:05.632 "name": "BaseBdev4", 00:11:05.632 "uuid": "4f7dfc45-ca32-5607-b378-615913370daa", 00:11:05.632 "is_configured": true, 00:11:05.632 "data_offset": 0, 00:11:05.632 "data_size": 65536 00:11:05.632 } 00:11:05.632 ] 00:11:05.632 }' 00:11:05.632 19:51:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:05.632 19:51:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:11:05.632 19:51:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:05.632 19:51:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:11:05.632 19:51:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:11:05.632 19:51:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:05.632 19:51:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:05.632 19:51:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:05.632 19:51:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:05.632 19:51:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:05.632 19:51:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:05.632 19:51:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:05.632 19:51:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.632 19:51:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:05.632 19:51:56 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.632 19:51:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:05.632 "name": "raid_bdev1", 00:11:05.632 "uuid": "03b8a56b-9cf6-4208-9012-efbe2cb7e053", 00:11:05.632 "strip_size_kb": 0, 00:11:05.632 "state": "online", 00:11:05.632 "raid_level": "raid1", 00:11:05.632 "superblock": false, 00:11:05.632 "num_base_bdevs": 4, 00:11:05.632 "num_base_bdevs_discovered": 3, 00:11:05.632 "num_base_bdevs_operational": 3, 00:11:05.632 "base_bdevs_list": [ 00:11:05.632 { 00:11:05.632 "name": "spare", 00:11:05.632 "uuid": "93c0c58b-22cb-5606-bf6e-7cce45b0e0ea", 00:11:05.632 "is_configured": true, 00:11:05.632 "data_offset": 0, 00:11:05.632 "data_size": 65536 00:11:05.632 }, 00:11:05.632 { 00:11:05.632 "name": null, 00:11:05.632 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:05.632 "is_configured": false, 00:11:05.632 "data_offset": 0, 00:11:05.632 "data_size": 65536 00:11:05.632 }, 00:11:05.632 { 00:11:05.632 "name": "BaseBdev3", 00:11:05.632 "uuid": "5b05a59d-713a-5936-b6a6-0ad59326e03c", 00:11:05.632 "is_configured": true, 00:11:05.632 "data_offset": 0, 00:11:05.632 "data_size": 65536 00:11:05.632 }, 00:11:05.632 { 00:11:05.632 "name": "BaseBdev4", 00:11:05.632 "uuid": "4f7dfc45-ca32-5607-b378-615913370daa", 00:11:05.632 "is_configured": true, 00:11:05.632 "data_offset": 0, 00:11:05.632 "data_size": 65536 00:11:05.632 } 00:11:05.632 ] 00:11:05.632 }' 00:11:05.632 19:51:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:05.632 19:51:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:05.632 19:51:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:05.632 19:51:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:05.632 19:51:56 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:05.632 19:51:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:05.632 19:51:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:05.632 19:51:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:05.632 19:51:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:05.632 19:51:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:05.632 19:51:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:05.632 19:51:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:05.632 19:51:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:05.633 19:51:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:05.633 19:51:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:05.633 19:51:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:05.633 19:51:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.633 19:51:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:05.633 19:51:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.633 19:51:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:05.633 "name": "raid_bdev1", 00:11:05.633 "uuid": "03b8a56b-9cf6-4208-9012-efbe2cb7e053", 00:11:05.633 "strip_size_kb": 0, 00:11:05.633 "state": "online", 00:11:05.633 "raid_level": "raid1", 00:11:05.633 "superblock": false, 00:11:05.633 "num_base_bdevs": 4, 00:11:05.633 
"num_base_bdevs_discovered": 3, 00:11:05.633 "num_base_bdevs_operational": 3, 00:11:05.633 "base_bdevs_list": [ 00:11:05.633 { 00:11:05.633 "name": "spare", 00:11:05.633 "uuid": "93c0c58b-22cb-5606-bf6e-7cce45b0e0ea", 00:11:05.633 "is_configured": true, 00:11:05.633 "data_offset": 0, 00:11:05.633 "data_size": 65536 00:11:05.633 }, 00:11:05.633 { 00:11:05.633 "name": null, 00:11:05.633 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:05.633 "is_configured": false, 00:11:05.633 "data_offset": 0, 00:11:05.633 "data_size": 65536 00:11:05.633 }, 00:11:05.633 { 00:11:05.633 "name": "BaseBdev3", 00:11:05.633 "uuid": "5b05a59d-713a-5936-b6a6-0ad59326e03c", 00:11:05.633 "is_configured": true, 00:11:05.633 "data_offset": 0, 00:11:05.633 "data_size": 65536 00:11:05.633 }, 00:11:05.633 { 00:11:05.633 "name": "BaseBdev4", 00:11:05.633 "uuid": "4f7dfc45-ca32-5607-b378-615913370daa", 00:11:05.633 "is_configured": true, 00:11:05.633 "data_offset": 0, 00:11:05.633 "data_size": 65536 00:11:05.633 } 00:11:05.633 ] 00:11:05.633 }' 00:11:05.633 19:51:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:05.633 19:51:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:06.149 81.86 IOPS, 245.57 MiB/s [2024-11-26T19:51:57.086Z] 19:51:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:06.149 19:51:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.149 19:51:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:06.149 [2024-11-26 19:51:56.833237] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:06.149 [2024-11-26 19:51:56.833271] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:06.149 00:11:06.149 Latency(us) 00:11:06.149 [2024-11-26T19:51:57.086Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min 
max 00:11:06.149 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:11:06.149 raid_bdev1 : 7.12 81.05 243.15 0.00 0.00 16152.21 237.88 115343.36 00:11:06.149 [2024-11-26T19:51:57.086Z] =================================================================================================================== 00:11:06.149 [2024-11-26T19:51:57.086Z] Total : 81.05 243.15 0.00 0.00 16152.21 237.88 115343.36 00:11:06.149 [2024-11-26 19:51:56.892953] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:06.149 [2024-11-26 19:51:56.892994] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:06.149 [2024-11-26 19:51:56.893080] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:06.149 [2024-11-26 19:51:56.893090] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:11:06.150 { 00:11:06.150 "results": [ 00:11:06.150 { 00:11:06.150 "job": "raid_bdev1", 00:11:06.150 "core_mask": "0x1", 00:11:06.150 "workload": "randrw", 00:11:06.150 "percentage": 50, 00:11:06.150 "status": "finished", 00:11:06.150 "queue_depth": 2, 00:11:06.150 "io_size": 3145728, 00:11:06.150 "runtime": 7.119033, 00:11:06.150 "iops": 81.0503336618892, 00:11:06.150 "mibps": 243.1510009856676, 00:11:06.150 "io_failed": 0, 00:11:06.150 "io_timeout": 0, 00:11:06.150 "avg_latency_us": 16152.212323690173, 00:11:06.150 "min_latency_us": 237.8830769230769, 00:11:06.150 "max_latency_us": 115343.36 00:11:06.150 } 00:11:06.150 ], 00:11:06.150 "core_count": 1 00:11:06.150 } 00:11:06.150 19:51:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.150 19:51:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:06.150 19:51:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:11:06.150 19:51:56 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.150 19:51:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:06.150 19:51:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.150 19:51:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:11:06.150 19:51:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:11:06.150 19:51:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:11:06.150 19:51:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:11:06.150 19:51:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:11:06.150 19:51:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:11:06.150 19:51:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:06.150 19:51:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:11:06.150 19:51:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:06.150 19:51:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:11:06.150 19:51:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:06.150 19:51:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:06.150 19:51:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:11:06.420 /dev/nbd0 00:11:06.420 19:51:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:06.420 19:51:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:06.420 19:51:57 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:11:06.420 19:51:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:11:06.420 19:51:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:06.420 19:51:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:06.420 19:51:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:11:06.420 19:51:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:11:06.420 19:51:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:06.420 19:51:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:06.420 19:51:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:06.420 1+0 records in 00:11:06.420 1+0 records out 00:11:06.420 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000341715 s, 12.0 MB/s 00:11:06.420 19:51:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:06.420 19:51:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:11:06.420 19:51:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:06.420 19:51:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:06.420 19:51:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:11:06.420 19:51:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:06.420 19:51:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:06.420 19:51:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in 
"${base_bdevs[@]:1}" 00:11:06.420 19:51:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:11:06.420 19:51:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@728 -- # continue 00:11:06.420 19:51:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:11:06.420 19:51:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:11:06.420 19:51:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:11:06.420 19:51:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:11:06.420 19:51:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:11:06.420 19:51:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:06.420 19:51:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:11:06.420 19:51:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:06.420 19:51:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:11:06.420 19:51:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:06.420 19:51:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:06.420 19:51:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:11:06.684 /dev/nbd1 00:11:06.685 19:51:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:11:06.685 19:51:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:11:06.685 19:51:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:11:06.685 19:51:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 
00:11:06.685 19:51:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:06.685 19:51:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:06.685 19:51:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:11:06.685 19:51:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:11:06.685 19:51:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:06.685 19:51:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:06.685 19:51:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:06.685 1+0 records in 00:11:06.685 1+0 records out 00:11:06.685 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000311192 s, 13.2 MB/s 00:11:06.685 19:51:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:06.685 19:51:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:11:06.685 19:51:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:06.685 19:51:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:06.685 19:51:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:11:06.685 19:51:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:06.685 19:51:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:06.685 19:51:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:11:06.685 19:51:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:11:06.685 
19:51:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:11:06.685 19:51:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:11:06.685 19:51:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:06.686 19:51:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:11:06.686 19:51:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:06.686 19:51:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:11:06.948 19:51:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:06.948 19:51:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:06.948 19:51:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:06.948 19:51:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:06.948 19:51:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:06.948 19:51:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:06.948 19:51:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:11:06.948 19:51:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:11:06.948 19:51:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:11:06.948 19:51:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:11:06.948 19:51:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:11:06.948 19:51:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:11:06.948 19:51:57 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:11:06.948 19:51:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:06.948 19:51:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:11:06.948 19:51:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:06.948 19:51:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:11:06.948 19:51:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:06.948 19:51:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:06.948 19:51:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:11:07.205 /dev/nbd1 00:11:07.205 19:51:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:11:07.205 19:51:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:11:07.205 19:51:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:11:07.205 19:51:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:11:07.206 19:51:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:07.206 19:51:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:07.206 19:51:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:11:07.206 19:51:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:11:07.206 19:51:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:07.206 19:51:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:07.206 19:51:57 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:07.206 1+0 records in 00:11:07.206 1+0 records out 00:11:07.206 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000226838 s, 18.1 MB/s 00:11:07.206 19:51:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:07.206 19:51:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:11:07.206 19:51:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:07.206 19:51:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:07.206 19:51:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:11:07.206 19:51:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:07.206 19:51:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:07.206 19:51:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:11:07.206 19:51:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:11:07.206 19:51:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:11:07.206 19:51:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:11:07.206 19:51:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:07.206 19:51:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:11:07.206 19:51:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:07.206 19:51:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 
00:11:07.464 19:51:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:07.464 19:51:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:07.464 19:51:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:07.464 19:51:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:07.464 19:51:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:07.464 19:51:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:07.464 19:51:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:11:07.464 19:51:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:11:07.464 19:51:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:11:07.465 19:51:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:11:07.465 19:51:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:11:07.465 19:51:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:07.465 19:51:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:11:07.465 19:51:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:07.465 19:51:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:11:07.723 19:51:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:07.723 19:51:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:07.723 19:51:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:07.723 19:51:58 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:07.723 19:51:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:07.723 19:51:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:07.723 19:51:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:11:07.723 19:51:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:11:07.723 19:51:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:11:07.723 19:51:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 76579 00:11:07.723 19:51:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # '[' -z 76579 ']' 00:11:07.723 19:51:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # kill -0 76579 00:11:07.723 19:51:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # uname 00:11:07.723 19:51:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:07.723 19:51:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76579 00:11:07.723 killing process with pid 76579 00:11:07.723 Received shutdown signal, test time was about 8.718141 seconds 00:11:07.723 00:11:07.723 Latency(us) 00:11:07.723 [2024-11-26T19:51:58.660Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:07.723 [2024-11-26T19:51:58.660Z] =================================================================================================================== 00:11:07.723 [2024-11-26T19:51:58.660Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:07.723 19:51:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:07.723 19:51:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:07.723 19:51:58 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 76579' 00:11:07.723 19:51:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@973 -- # kill 76579 00:11:07.723 [2024-11-26 19:51:58.480298] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:07.723 19:51:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@978 -- # wait 76579 00:11:07.981 [2024-11-26 19:51:58.683223] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:08.545 19:51:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:11:08.545 00:11:08.545 real 0m11.073s 00:11:08.545 user 0m13.811s 00:11:08.545 sys 0m1.238s 00:11:08.545 19:51:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:08.545 19:51:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:08.545 ************************************ 00:11:08.545 END TEST raid_rebuild_test_io 00:11:08.545 ************************************ 00:11:08.545 19:51:59 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true true 00:11:08.545 19:51:59 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:11:08.545 19:51:59 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:08.545 19:51:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:08.545 ************************************ 00:11:08.545 START TEST raid_rebuild_test_sb_io 00:11:08.545 ************************************ 00:11:08.545 19:51:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 true true true 00:11:08.545 19:51:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:11:08.545 19:51:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:11:08.545 19:51:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local 
superblock=true 00:11:08.545 19:51:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:11:08.545 19:51:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:11:08.545 19:51:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:11:08.545 19:51:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:08.545 19:51:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:11:08.545 19:51:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:08.545 19:51:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:08.545 19:51:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:11:08.545 19:51:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:08.545 19:51:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:08.545 19:51:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:11:08.545 19:51:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:08.545 19:51:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:08.545 19:51:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:11:08.545 19:51:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:08.545 19:51:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:08.545 19:51:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:08.545 19:51:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:11:08.545 19:51:59 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:11:08.545 19:51:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:11:08.545 19:51:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:11:08.545 19:51:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:11:08.545 19:51:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:11:08.545 19:51:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:11:08.545 19:51:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:11:08.545 19:51:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:11:08.545 19:51:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:11:08.545 19:51:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=76966 00:11:08.545 19:51:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 76966 00:11:08.545 19:51:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # '[' -z 76966 ']' 00:11:08.545 19:51:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:08.545 19:51:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:08.545 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:08.545 19:51:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:11:08.545 19:51:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:11:08.545 19:51:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:08.545 19:51:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:08.545 I/O size of 3145728 is greater than zero copy threshold (65536). 00:11:08.545 Zero copy mechanism will not be used. 00:11:08.545 [2024-11-26 19:51:59.396892] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 00:11:08.545 [2024-11-26 19:51:59.397025] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76966 ] 00:11:08.804 [2024-11-26 19:51:59.555632] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:08.804 [2024-11-26 19:51:59.637835] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:09.074 [2024-11-26 19:51:59.745561] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:09.074 [2024-11-26 19:51:59.745596] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:09.333 19:52:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:09.333 19:52:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # return 0 00:11:09.333 19:52:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:09.333 19:52:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:09.333 19:52:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 
00:11:09.333 19:52:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:09.594 BaseBdev1_malloc 00:11:09.594 19:52:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.594 19:52:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:11:09.594 19:52:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.594 19:52:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:09.594 [2024-11-26 19:52:00.278395] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:11:09.594 [2024-11-26 19:52:00.278453] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:09.594 [2024-11-26 19:52:00.278471] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:09.594 [2024-11-26 19:52:00.278480] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:09.594 [2024-11-26 19:52:00.280158] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:09.594 [2024-11-26 19:52:00.280192] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:09.594 BaseBdev1 00:11:09.594 19:52:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.594 19:52:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:09.594 19:52:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:09.594 19:52:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.595 19:52:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:09.595 BaseBdev2_malloc 00:11:09.595 19:52:00 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.595 19:52:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:11:09.595 19:52:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.595 19:52:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:09.595 [2024-11-26 19:52:00.313139] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:11:09.595 [2024-11-26 19:52:00.313185] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:09.595 [2024-11-26 19:52:00.313201] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:09.595 [2024-11-26 19:52:00.313210] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:09.595 [2024-11-26 19:52:00.314862] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:09.595 [2024-11-26 19:52:00.314894] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:09.595 BaseBdev2 00:11:09.595 19:52:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.595 19:52:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:09.595 19:52:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:09.595 19:52:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.595 19:52:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:09.595 BaseBdev3_malloc 00:11:09.595 19:52:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.595 19:52:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # 
rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:11:09.595 19:52:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.595 19:52:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:09.595 [2024-11-26 19:52:00.365300] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:11:09.595 [2024-11-26 19:52:00.365359] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:09.595 [2024-11-26 19:52:00.365376] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:11:09.595 [2024-11-26 19:52:00.365385] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:09.595 [2024-11-26 19:52:00.367041] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:09.595 [2024-11-26 19:52:00.367071] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:09.595 BaseBdev3 00:11:09.595 19:52:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.595 19:52:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:09.595 19:52:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:11:09.595 19:52:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.595 19:52:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:09.595 BaseBdev4_malloc 00:11:09.595 19:52:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.595 19:52:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:11:09.595 19:52:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 
00:11:09.595 19:52:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:09.595 [2024-11-26 19:52:00.400124] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:11:09.595 [2024-11-26 19:52:00.400163] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:09.595 [2024-11-26 19:52:00.400176] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:11:09.595 [2024-11-26 19:52:00.400184] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:09.595 [2024-11-26 19:52:00.401810] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:09.595 [2024-11-26 19:52:00.401839] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:09.595 BaseBdev4 00:11:09.595 19:52:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.595 19:52:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:11:09.595 19:52:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.595 19:52:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:09.595 spare_malloc 00:11:09.595 19:52:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.595 19:52:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:11:09.595 19:52:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.595 19:52:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:09.595 spare_delay 00:11:09.595 19:52:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.595 19:52:00 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:11:09.595 19:52:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.595 19:52:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:09.595 [2024-11-26 19:52:00.446901] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:11:09.595 [2024-11-26 19:52:00.446953] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:09.595 [2024-11-26 19:52:00.446968] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:11:09.595 [2024-11-26 19:52:00.446977] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:09.595 [2024-11-26 19:52:00.448640] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:09.595 [2024-11-26 19:52:00.448670] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:11:09.595 spare 00:11:09.595 19:52:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.595 19:52:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:11:09.595 19:52:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.595 19:52:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:09.595 [2024-11-26 19:52:00.454950] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:09.595 [2024-11-26 19:52:00.456396] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:09.595 [2024-11-26 19:52:00.456447] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:09.595 [2024-11-26 19:52:00.456486] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:09.595 [2024-11-26 19:52:00.456626] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:11:09.595 [2024-11-26 19:52:00.456636] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:09.595 [2024-11-26 19:52:00.456832] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:11:09.595 [2024-11-26 19:52:00.456962] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:11:09.595 [2024-11-26 19:52:00.456969] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:11:09.596 [2024-11-26 19:52:00.457078] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:09.596 19:52:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.596 19:52:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:11:09.596 19:52:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:09.596 19:52:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:09.596 19:52:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:09.596 19:52:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:09.596 19:52:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:09.596 19:52:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:09.596 19:52:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:09.596 19:52:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:11:09.596 19:52:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:09.596 19:52:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:09.596 19:52:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:09.596 19:52:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.596 19:52:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:09.596 19:52:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.596 19:52:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:09.596 "name": "raid_bdev1", 00:11:09.596 "uuid": "7c9461de-7ff5-4e18-ab2b-6c7188675bea", 00:11:09.596 "strip_size_kb": 0, 00:11:09.596 "state": "online", 00:11:09.596 "raid_level": "raid1", 00:11:09.596 "superblock": true, 00:11:09.596 "num_base_bdevs": 4, 00:11:09.596 "num_base_bdevs_discovered": 4, 00:11:09.596 "num_base_bdevs_operational": 4, 00:11:09.596 "base_bdevs_list": [ 00:11:09.596 { 00:11:09.596 "name": "BaseBdev1", 00:11:09.596 "uuid": "7d37544f-de86-5a3f-a4ed-51b3f679878f", 00:11:09.596 "is_configured": true, 00:11:09.596 "data_offset": 2048, 00:11:09.596 "data_size": 63488 00:11:09.596 }, 00:11:09.596 { 00:11:09.596 "name": "BaseBdev2", 00:11:09.596 "uuid": "55d33dec-8b84-5043-8b3e-8c64b48fc13f", 00:11:09.596 "is_configured": true, 00:11:09.596 "data_offset": 2048, 00:11:09.596 "data_size": 63488 00:11:09.596 }, 00:11:09.596 { 00:11:09.596 "name": "BaseBdev3", 00:11:09.596 "uuid": "a4d7e767-a183-5511-a1dc-960b83eb89c7", 00:11:09.596 "is_configured": true, 00:11:09.596 "data_offset": 2048, 00:11:09.596 "data_size": 63488 00:11:09.596 }, 00:11:09.596 { 00:11:09.596 "name": "BaseBdev4", 00:11:09.596 "uuid": "76057de4-51e1-5aaa-8c24-a6b91c2d4b44", 00:11:09.596 
"is_configured": true, 00:11:09.596 "data_offset": 2048, 00:11:09.596 "data_size": 63488 00:11:09.596 } 00:11:09.596 ] 00:11:09.596 }' 00:11:09.596 19:52:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:09.596 19:52:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:09.856 19:52:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:09.856 19:52:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.856 19:52:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:09.856 19:52:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:11:10.115 [2024-11-26 19:52:00.791297] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:10.116 19:52:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.116 19:52:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:11:10.116 19:52:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:10.116 19:52:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:11:10.116 19:52:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.116 19:52:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:10.116 19:52:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.116 19:52:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:11:10.116 19:52:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:11:10.116 19:52:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd 
bdev_raid_remove_base_bdev BaseBdev1 00:11:10.116 19:52:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:10.116 19:52:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.116 19:52:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:10.116 [2024-11-26 19:52:00.859035] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:10.116 19:52:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.116 19:52:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:10.116 19:52:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:10.116 19:52:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:10.116 19:52:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:10.116 19:52:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:10.116 19:52:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:10.116 19:52:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:10.116 19:52:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:10.116 19:52:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:10.116 19:52:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:10.116 19:52:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:10.116 19:52:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:11:10.116 19:52:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.116 19:52:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:10.116 19:52:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.116 19:52:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:10.116 "name": "raid_bdev1", 00:11:10.116 "uuid": "7c9461de-7ff5-4e18-ab2b-6c7188675bea", 00:11:10.116 "strip_size_kb": 0, 00:11:10.116 "state": "online", 00:11:10.116 "raid_level": "raid1", 00:11:10.116 "superblock": true, 00:11:10.116 "num_base_bdevs": 4, 00:11:10.116 "num_base_bdevs_discovered": 3, 00:11:10.116 "num_base_bdevs_operational": 3, 00:11:10.116 "base_bdevs_list": [ 00:11:10.116 { 00:11:10.116 "name": null, 00:11:10.116 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:10.116 "is_configured": false, 00:11:10.116 "data_offset": 0, 00:11:10.116 "data_size": 63488 00:11:10.116 }, 00:11:10.116 { 00:11:10.116 "name": "BaseBdev2", 00:11:10.116 "uuid": "55d33dec-8b84-5043-8b3e-8c64b48fc13f", 00:11:10.116 "is_configured": true, 00:11:10.116 "data_offset": 2048, 00:11:10.116 "data_size": 63488 00:11:10.116 }, 00:11:10.116 { 00:11:10.116 "name": "BaseBdev3", 00:11:10.116 "uuid": "a4d7e767-a183-5511-a1dc-960b83eb89c7", 00:11:10.116 "is_configured": true, 00:11:10.116 "data_offset": 2048, 00:11:10.116 "data_size": 63488 00:11:10.116 }, 00:11:10.116 { 00:11:10.116 "name": "BaseBdev4", 00:11:10.116 "uuid": "76057de4-51e1-5aaa-8c24-a6b91c2d4b44", 00:11:10.116 "is_configured": true, 00:11:10.116 "data_offset": 2048, 00:11:10.116 "data_size": 63488 00:11:10.116 } 00:11:10.116 ] 00:11:10.116 }' 00:11:10.116 19:52:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:10.116 19:52:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:10.116 [2024-11-26 19:52:00.943371] 
bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:11:10.116 I/O size of 3145728 is greater than zero copy threshold (65536). 00:11:10.116 Zero copy mechanism will not be used. 00:11:10.116 Running I/O for 60 seconds... 00:11:10.374 19:52:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:10.374 19:52:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.374 19:52:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:10.374 [2024-11-26 19:52:01.182157] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:10.375 19:52:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.375 19:52:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:11:10.375 [2024-11-26 19:52:01.223491] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:11:10.375 [2024-11-26 19:52:01.225103] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:10.633 [2024-11-26 19:52:01.338055] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:11:10.633 [2024-11-26 19:52:01.338438] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:11:10.633 [2024-11-26 19:52:01.541542] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:11:10.633 [2024-11-26 19:52:01.541767] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:11:10.891 [2024-11-26 19:52:01.789868] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:11:11.215 191.00 
IOPS, 573.00 MiB/s [2024-11-26T19:52:02.152Z] [2024-11-26 19:52:01.991856] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:11:11.215 [2024-11-26 19:52:01.992381] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:11:11.482 19:52:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:11.482 19:52:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:11.482 19:52:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:11.482 19:52:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:11.482 19:52:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:11.482 19:52:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:11.482 19:52:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:11.482 19:52:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.482 19:52:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:11.482 19:52:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.482 19:52:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:11.482 "name": "raid_bdev1", 00:11:11.482 "uuid": "7c9461de-7ff5-4e18-ab2b-6c7188675bea", 00:11:11.482 "strip_size_kb": 0, 00:11:11.482 "state": "online", 00:11:11.482 "raid_level": "raid1", 00:11:11.482 "superblock": true, 00:11:11.482 "num_base_bdevs": 4, 00:11:11.482 "num_base_bdevs_discovered": 4, 00:11:11.482 "num_base_bdevs_operational": 4, 00:11:11.482 "process": { 00:11:11.482 
"type": "rebuild", 00:11:11.482 "target": "spare", 00:11:11.482 "progress": { 00:11:11.482 "blocks": 12288, 00:11:11.482 "percent": 19 00:11:11.482 } 00:11:11.482 }, 00:11:11.482 "base_bdevs_list": [ 00:11:11.482 { 00:11:11.482 "name": "spare", 00:11:11.482 "uuid": "bdfe947d-2c90-50b6-956b-d1901e30fb49", 00:11:11.482 "is_configured": true, 00:11:11.482 "data_offset": 2048, 00:11:11.482 "data_size": 63488 00:11:11.482 }, 00:11:11.482 { 00:11:11.482 "name": "BaseBdev2", 00:11:11.482 "uuid": "55d33dec-8b84-5043-8b3e-8c64b48fc13f", 00:11:11.482 "is_configured": true, 00:11:11.482 "data_offset": 2048, 00:11:11.482 "data_size": 63488 00:11:11.482 }, 00:11:11.482 { 00:11:11.482 "name": "BaseBdev3", 00:11:11.482 "uuid": "a4d7e767-a183-5511-a1dc-960b83eb89c7", 00:11:11.482 "is_configured": true, 00:11:11.482 "data_offset": 2048, 00:11:11.482 "data_size": 63488 00:11:11.482 }, 00:11:11.482 { 00:11:11.482 "name": "BaseBdev4", 00:11:11.482 "uuid": "76057de4-51e1-5aaa-8c24-a6b91c2d4b44", 00:11:11.482 "is_configured": true, 00:11:11.482 "data_offset": 2048, 00:11:11.482 "data_size": 63488 00:11:11.482 } 00:11:11.482 ] 00:11:11.482 }' 00:11:11.482 19:52:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:11.482 19:52:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:11.482 19:52:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:11.482 19:52:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:11.482 19:52:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:11:11.482 19:52:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.482 19:52:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:11.482 [2024-11-26 19:52:02.317639] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:11.482 [2024-11-26 19:52:02.318252] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:11:11.482 [2024-11-26 19:52:02.318546] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:11:11.482 [2024-11-26 19:52:02.324180] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:11:11.482 [2024-11-26 19:52:02.326587] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:11.482 [2024-11-26 19:52:02.326619] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:11.482 [2024-11-26 19:52:02.326627] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:11:11.482 [2024-11-26 19:52:02.340140] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:11:11.482 19:52:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.482 19:52:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:11.482 19:52:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:11.482 19:52:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:11.482 19:52:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:11.482 19:52:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:11.482 19:52:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:11.482 19:52:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:11.482 19:52:02 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:11.482 19:52:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:11.482 19:52:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:11.482 19:52:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:11.482 19:52:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.482 19:52:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:11.482 19:52:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:11.482 19:52:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.482 19:52:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:11.482 "name": "raid_bdev1", 00:11:11.482 "uuid": "7c9461de-7ff5-4e18-ab2b-6c7188675bea", 00:11:11.482 "strip_size_kb": 0, 00:11:11.482 "state": "online", 00:11:11.482 "raid_level": "raid1", 00:11:11.482 "superblock": true, 00:11:11.482 "num_base_bdevs": 4, 00:11:11.482 "num_base_bdevs_discovered": 3, 00:11:11.482 "num_base_bdevs_operational": 3, 00:11:11.482 "base_bdevs_list": [ 00:11:11.482 { 00:11:11.482 "name": null, 00:11:11.482 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:11.482 "is_configured": false, 00:11:11.482 "data_offset": 0, 00:11:11.482 "data_size": 63488 00:11:11.482 }, 00:11:11.482 { 00:11:11.482 "name": "BaseBdev2", 00:11:11.482 "uuid": "55d33dec-8b84-5043-8b3e-8c64b48fc13f", 00:11:11.482 "is_configured": true, 00:11:11.482 "data_offset": 2048, 00:11:11.482 "data_size": 63488 00:11:11.482 }, 00:11:11.482 { 00:11:11.482 "name": "BaseBdev3", 00:11:11.482 "uuid": "a4d7e767-a183-5511-a1dc-960b83eb89c7", 00:11:11.482 "is_configured": true, 00:11:11.482 "data_offset": 2048, 00:11:11.482 
"data_size": 63488 00:11:11.482 }, 00:11:11.482 { 00:11:11.482 "name": "BaseBdev4", 00:11:11.482 "uuid": "76057de4-51e1-5aaa-8c24-a6b91c2d4b44", 00:11:11.482 "is_configured": true, 00:11:11.482 "data_offset": 2048, 00:11:11.482 "data_size": 63488 00:11:11.482 } 00:11:11.482 ] 00:11:11.482 }' 00:11:11.482 19:52:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:11.482 19:52:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:12.049 19:52:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:12.049 19:52:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:12.049 19:52:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:12.049 19:52:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:12.049 19:52:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:12.049 19:52:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:12.049 19:52:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.049 19:52:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:12.049 19:52:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:12.049 19:52:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.049 19:52:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:12.049 "name": "raid_bdev1", 00:11:12.049 "uuid": "7c9461de-7ff5-4e18-ab2b-6c7188675bea", 00:11:12.049 "strip_size_kb": 0, 00:11:12.049 "state": "online", 00:11:12.049 "raid_level": "raid1", 00:11:12.049 "superblock": true, 00:11:12.049 "num_base_bdevs": 4, 
00:11:12.049 "num_base_bdevs_discovered": 3, 00:11:12.049 "num_base_bdevs_operational": 3, 00:11:12.049 "base_bdevs_list": [ 00:11:12.049 { 00:11:12.049 "name": null, 00:11:12.049 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:12.049 "is_configured": false, 00:11:12.049 "data_offset": 0, 00:11:12.049 "data_size": 63488 00:11:12.049 }, 00:11:12.049 { 00:11:12.049 "name": "BaseBdev2", 00:11:12.049 "uuid": "55d33dec-8b84-5043-8b3e-8c64b48fc13f", 00:11:12.049 "is_configured": true, 00:11:12.049 "data_offset": 2048, 00:11:12.049 "data_size": 63488 00:11:12.049 }, 00:11:12.049 { 00:11:12.049 "name": "BaseBdev3", 00:11:12.049 "uuid": "a4d7e767-a183-5511-a1dc-960b83eb89c7", 00:11:12.049 "is_configured": true, 00:11:12.049 "data_offset": 2048, 00:11:12.049 "data_size": 63488 00:11:12.049 }, 00:11:12.049 { 00:11:12.049 "name": "BaseBdev4", 00:11:12.049 "uuid": "76057de4-51e1-5aaa-8c24-a6b91c2d4b44", 00:11:12.049 "is_configured": true, 00:11:12.049 "data_offset": 2048, 00:11:12.049 "data_size": 63488 00:11:12.049 } 00:11:12.049 ] 00:11:12.049 }' 00:11:12.049 19:52:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:12.049 19:52:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:12.050 19:52:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:12.050 19:52:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:12.050 19:52:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:12.050 19:52:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.050 19:52:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:12.050 [2024-11-26 19:52:02.805887] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:12.050 
19:52:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.050 19:52:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:11:12.050 [2024-11-26 19:52:02.852505] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:11:12.050 [2024-11-26 19:52:02.854110] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:12.050 194.00 IOPS, 582.00 MiB/s [2024-11-26T19:52:02.987Z] [2024-11-26 19:52:02.961894] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:11:12.050 [2024-11-26 19:52:02.962859] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:11:12.308 [2024-11-26 19:52:03.191708] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:11:12.308 [2024-11-26 19:52:03.191928] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:11:12.874 [2024-11-26 19:52:03.544404] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:11:12.874 [2024-11-26 19:52:03.544633] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:11:13.132 19:52:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:13.132 19:52:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:13.132 19:52:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:13.132 19:52:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:13.132 19:52:03 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:13.132 19:52:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:13.132 19:52:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:13.132 19:52:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.132 19:52:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:13.132 19:52:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.132 19:52:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:13.132 "name": "raid_bdev1", 00:11:13.132 "uuid": "7c9461de-7ff5-4e18-ab2b-6c7188675bea", 00:11:13.132 "strip_size_kb": 0, 00:11:13.132 "state": "online", 00:11:13.132 "raid_level": "raid1", 00:11:13.132 "superblock": true, 00:11:13.132 "num_base_bdevs": 4, 00:11:13.132 "num_base_bdevs_discovered": 4, 00:11:13.132 "num_base_bdevs_operational": 4, 00:11:13.132 "process": { 00:11:13.132 "type": "rebuild", 00:11:13.132 "target": "spare", 00:11:13.132 "progress": { 00:11:13.132 "blocks": 12288, 00:11:13.132 "percent": 19 00:11:13.132 } 00:11:13.132 }, 00:11:13.132 "base_bdevs_list": [ 00:11:13.132 { 00:11:13.132 "name": "spare", 00:11:13.132 "uuid": "bdfe947d-2c90-50b6-956b-d1901e30fb49", 00:11:13.132 "is_configured": true, 00:11:13.132 "data_offset": 2048, 00:11:13.132 "data_size": 63488 00:11:13.132 }, 00:11:13.132 { 00:11:13.132 "name": "BaseBdev2", 00:11:13.132 "uuid": "55d33dec-8b84-5043-8b3e-8c64b48fc13f", 00:11:13.132 "is_configured": true, 00:11:13.132 "data_offset": 2048, 00:11:13.132 "data_size": 63488 00:11:13.132 }, 00:11:13.132 { 00:11:13.132 "name": "BaseBdev3", 00:11:13.132 "uuid": "a4d7e767-a183-5511-a1dc-960b83eb89c7", 00:11:13.132 "is_configured": true, 00:11:13.132 "data_offset": 2048, 00:11:13.132 "data_size": 63488 00:11:13.132 }, 
00:11:13.132 { 00:11:13.132 "name": "BaseBdev4", 00:11:13.132 "uuid": "76057de4-51e1-5aaa-8c24-a6b91c2d4b44", 00:11:13.132 "is_configured": true, 00:11:13.132 "data_offset": 2048, 00:11:13.132 "data_size": 63488 00:11:13.132 } 00:11:13.132 ] 00:11:13.132 }' 00:11:13.132 19:52:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:13.132 19:52:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:13.132 19:52:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:13.132 19:52:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:13.132 19:52:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:11:13.132 19:52:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:11:13.132 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:11:13.132 19:52:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:11:13.132 19:52:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:11:13.132 19:52:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:11:13.132 19:52:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:13.132 19:52:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.132 19:52:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:13.132 154.33 IOPS, 463.00 MiB/s [2024-11-26T19:52:04.069Z] [2024-11-26 19:52:03.947564] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:13.390 [2024-11-26 19:52:04.206337] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 
0x60d000006220 00:11:13.390 [2024-11-26 19:52:04.206388] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:11:13.390 19:52:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.390 19:52:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:11:13.390 19:52:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:11:13.390 19:52:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:13.390 19:52:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:13.390 19:52:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:13.390 19:52:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:13.390 19:52:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:13.390 19:52:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:13.390 19:52:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.390 19:52:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:13.390 19:52:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:13.390 19:52:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.390 19:52:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:13.390 "name": "raid_bdev1", 00:11:13.390 "uuid": "7c9461de-7ff5-4e18-ab2b-6c7188675bea", 00:11:13.390 "strip_size_kb": 0, 00:11:13.390 "state": "online", 00:11:13.390 "raid_level": "raid1", 00:11:13.391 "superblock": true, 00:11:13.391 "num_base_bdevs": 4, 
00:11:13.391 "num_base_bdevs_discovered": 3, 00:11:13.391 "num_base_bdevs_operational": 3, 00:11:13.391 "process": { 00:11:13.391 "type": "rebuild", 00:11:13.391 "target": "spare", 00:11:13.391 "progress": { 00:11:13.391 "blocks": 16384, 00:11:13.391 "percent": 25 00:11:13.391 } 00:11:13.391 }, 00:11:13.391 "base_bdevs_list": [ 00:11:13.391 { 00:11:13.391 "name": "spare", 00:11:13.391 "uuid": "bdfe947d-2c90-50b6-956b-d1901e30fb49", 00:11:13.391 "is_configured": true, 00:11:13.391 "data_offset": 2048, 00:11:13.391 "data_size": 63488 00:11:13.391 }, 00:11:13.391 { 00:11:13.391 "name": null, 00:11:13.391 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:13.391 "is_configured": false, 00:11:13.391 "data_offset": 0, 00:11:13.391 "data_size": 63488 00:11:13.391 }, 00:11:13.391 { 00:11:13.391 "name": "BaseBdev3", 00:11:13.391 "uuid": "a4d7e767-a183-5511-a1dc-960b83eb89c7", 00:11:13.391 "is_configured": true, 00:11:13.391 "data_offset": 2048, 00:11:13.391 "data_size": 63488 00:11:13.391 }, 00:11:13.391 { 00:11:13.391 "name": "BaseBdev4", 00:11:13.391 "uuid": "76057de4-51e1-5aaa-8c24-a6b91c2d4b44", 00:11:13.391 "is_configured": true, 00:11:13.391 "data_offset": 2048, 00:11:13.391 "data_size": 63488 00:11:13.391 } 00:11:13.391 ] 00:11:13.391 }' 00:11:13.391 19:52:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:13.391 19:52:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:13.391 19:52:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:13.391 19:52:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:13.391 19:52:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=394 00:11:13.391 19:52:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:13.391 19:52:04 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:13.391 19:52:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:13.391 19:52:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:13.391 19:52:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:13.391 19:52:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:13.391 19:52:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:13.391 19:52:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.391 19:52:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:13.391 19:52:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:13.649 19:52:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.649 19:52:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:13.649 "name": "raid_bdev1", 00:11:13.649 "uuid": "7c9461de-7ff5-4e18-ab2b-6c7188675bea", 00:11:13.649 "strip_size_kb": 0, 00:11:13.649 "state": "online", 00:11:13.649 "raid_level": "raid1", 00:11:13.649 "superblock": true, 00:11:13.649 "num_base_bdevs": 4, 00:11:13.649 "num_base_bdevs_discovered": 3, 00:11:13.649 "num_base_bdevs_operational": 3, 00:11:13.649 "process": { 00:11:13.649 "type": "rebuild", 00:11:13.649 "target": "spare", 00:11:13.649 "progress": { 00:11:13.649 "blocks": 16384, 00:11:13.649 "percent": 25 00:11:13.649 } 00:11:13.649 }, 00:11:13.649 "base_bdevs_list": [ 00:11:13.649 { 00:11:13.649 "name": "spare", 00:11:13.649 "uuid": "bdfe947d-2c90-50b6-956b-d1901e30fb49", 00:11:13.649 "is_configured": true, 00:11:13.649 "data_offset": 2048, 00:11:13.649 "data_size": 63488 
00:11:13.649 }, 00:11:13.649 { 00:11:13.649 "name": null, 00:11:13.649 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:13.649 "is_configured": false, 00:11:13.649 "data_offset": 0, 00:11:13.649 "data_size": 63488 00:11:13.649 }, 00:11:13.649 { 00:11:13.649 "name": "BaseBdev3", 00:11:13.649 "uuid": "a4d7e767-a183-5511-a1dc-960b83eb89c7", 00:11:13.649 "is_configured": true, 00:11:13.649 "data_offset": 2048, 00:11:13.649 "data_size": 63488 00:11:13.649 }, 00:11:13.649 { 00:11:13.649 "name": "BaseBdev4", 00:11:13.649 "uuid": "76057de4-51e1-5aaa-8c24-a6b91c2d4b44", 00:11:13.649 "is_configured": true, 00:11:13.649 "data_offset": 2048, 00:11:13.649 "data_size": 63488 00:11:13.649 } 00:11:13.649 ] 00:11:13.649 }' 00:11:13.649 19:52:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:13.649 19:52:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:13.649 19:52:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:13.649 19:52:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:13.649 19:52:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:13.649 [2024-11-26 19:52:04.445117] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:11:13.649 [2024-11-26 19:52:04.558749] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:11:14.215 [2024-11-26 19:52:04.906892] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:11:14.215 133.25 IOPS, 399.75 MiB/s [2024-11-26T19:52:05.152Z] [2024-11-26 19:52:05.123100] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 
00:11:14.215 [2024-11-26 19:52:05.123882] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:11:14.474 [2024-11-26 19:52:05.337866] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:11:14.474 [2024-11-26 19:52:05.338334] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:11:14.732 19:52:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:14.732 19:52:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:14.732 19:52:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:14.732 19:52:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:14.732 19:52:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:14.732 19:52:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:14.732 19:52:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:14.732 19:52:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:14.732 19:52:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.732 19:52:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:14.732 19:52:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.732 19:52:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:14.732 "name": "raid_bdev1", 00:11:14.732 "uuid": "7c9461de-7ff5-4e18-ab2b-6c7188675bea", 00:11:14.732 "strip_size_kb": 0, 00:11:14.732 "state": 
"online", 00:11:14.732 "raid_level": "raid1", 00:11:14.732 "superblock": true, 00:11:14.732 "num_base_bdevs": 4, 00:11:14.732 "num_base_bdevs_discovered": 3, 00:11:14.732 "num_base_bdevs_operational": 3, 00:11:14.732 "process": { 00:11:14.732 "type": "rebuild", 00:11:14.732 "target": "spare", 00:11:14.732 "progress": { 00:11:14.732 "blocks": 34816, 00:11:14.732 "percent": 54 00:11:14.732 } 00:11:14.732 }, 00:11:14.732 "base_bdevs_list": [ 00:11:14.732 { 00:11:14.732 "name": "spare", 00:11:14.732 "uuid": "bdfe947d-2c90-50b6-956b-d1901e30fb49", 00:11:14.732 "is_configured": true, 00:11:14.732 "data_offset": 2048, 00:11:14.732 "data_size": 63488 00:11:14.732 }, 00:11:14.732 { 00:11:14.732 "name": null, 00:11:14.732 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:14.732 "is_configured": false, 00:11:14.732 "data_offset": 0, 00:11:14.732 "data_size": 63488 00:11:14.732 }, 00:11:14.732 { 00:11:14.732 "name": "BaseBdev3", 00:11:14.732 "uuid": "a4d7e767-a183-5511-a1dc-960b83eb89c7", 00:11:14.732 "is_configured": true, 00:11:14.732 "data_offset": 2048, 00:11:14.732 "data_size": 63488 00:11:14.732 }, 00:11:14.732 { 00:11:14.732 "name": "BaseBdev4", 00:11:14.732 "uuid": "76057de4-51e1-5aaa-8c24-a6b91c2d4b44", 00:11:14.732 "is_configured": true, 00:11:14.732 "data_offset": 2048, 00:11:14.732 "data_size": 63488 00:11:14.732 } 00:11:14.732 ] 00:11:14.732 }' 00:11:14.732 19:52:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:14.732 19:52:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:14.732 19:52:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:14.732 19:52:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:14.732 19:52:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:14.732 [2024-11-26 19:52:05.659376] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:11:14.990 [2024-11-26 19:52:05.872064] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:11:15.506 113.60 IOPS, 340.80 MiB/s [2024-11-26T19:52:06.443Z] [2024-11-26 19:52:06.220101] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:11:15.764 19:52:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:15.764 19:52:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:15.764 19:52:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:15.764 19:52:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:15.764 19:52:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:15.764 19:52:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:15.764 19:52:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:15.764 19:52:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:15.764 19:52:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.764 19:52:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:15.764 19:52:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.764 19:52:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:15.764 "name": "raid_bdev1", 00:11:15.764 "uuid": "7c9461de-7ff5-4e18-ab2b-6c7188675bea", 00:11:15.764 "strip_size_kb": 0, 00:11:15.764 "state": 
"online", 00:11:15.764 "raid_level": "raid1", 00:11:15.764 "superblock": true, 00:11:15.764 "num_base_bdevs": 4, 00:11:15.764 "num_base_bdevs_discovered": 3, 00:11:15.764 "num_base_bdevs_operational": 3, 00:11:15.764 "process": { 00:11:15.764 "type": "rebuild", 00:11:15.764 "target": "spare", 00:11:15.764 "progress": { 00:11:15.764 "blocks": 49152, 00:11:15.764 "percent": 77 00:11:15.764 } 00:11:15.764 }, 00:11:15.764 "base_bdevs_list": [ 00:11:15.764 { 00:11:15.764 "name": "spare", 00:11:15.764 "uuid": "bdfe947d-2c90-50b6-956b-d1901e30fb49", 00:11:15.764 "is_configured": true, 00:11:15.764 "data_offset": 2048, 00:11:15.764 "data_size": 63488 00:11:15.764 }, 00:11:15.764 { 00:11:15.764 "name": null, 00:11:15.764 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:15.764 "is_configured": false, 00:11:15.764 "data_offset": 0, 00:11:15.764 "data_size": 63488 00:11:15.764 }, 00:11:15.764 { 00:11:15.764 "name": "BaseBdev3", 00:11:15.764 "uuid": "a4d7e767-a183-5511-a1dc-960b83eb89c7", 00:11:15.764 "is_configured": true, 00:11:15.764 "data_offset": 2048, 00:11:15.764 "data_size": 63488 00:11:15.764 }, 00:11:15.764 { 00:11:15.764 "name": "BaseBdev4", 00:11:15.764 "uuid": "76057de4-51e1-5aaa-8c24-a6b91c2d4b44", 00:11:15.764 "is_configured": true, 00:11:15.764 "data_offset": 2048, 00:11:15.764 "data_size": 63488 00:11:15.764 } 00:11:15.764 ] 00:11:15.764 }' 00:11:15.764 19:52:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:15.764 19:52:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:15.764 19:52:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:15.764 19:52:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:15.764 19:52:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:16.587 102.00 IOPS, 306.00 MiB/s [2024-11-26T19:52:07.524Z] 
[2024-11-26 19:52:07.325932] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:11:16.587 [2024-11-26 19:52:07.425944] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:11:16.588 [2024-11-26 19:52:07.434247] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:16.845 19:52:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:16.845 19:52:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:16.845 19:52:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:16.845 19:52:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:16.845 19:52:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:16.845 19:52:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:16.845 19:52:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:16.846 19:52:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.846 19:52:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:16.846 19:52:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:16.846 19:52:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.846 19:52:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:16.846 "name": "raid_bdev1", 00:11:16.846 "uuid": "7c9461de-7ff5-4e18-ab2b-6c7188675bea", 00:11:16.846 "strip_size_kb": 0, 00:11:16.846 "state": "online", 00:11:16.846 "raid_level": "raid1", 00:11:16.846 "superblock": true, 00:11:16.846 "num_base_bdevs": 4, 
00:11:16.846 "num_base_bdevs_discovered": 3, 00:11:16.846 "num_base_bdevs_operational": 3, 00:11:16.846 "base_bdevs_list": [ 00:11:16.846 { 00:11:16.846 "name": "spare", 00:11:16.846 "uuid": "bdfe947d-2c90-50b6-956b-d1901e30fb49", 00:11:16.846 "is_configured": true, 00:11:16.846 "data_offset": 2048, 00:11:16.846 "data_size": 63488 00:11:16.846 }, 00:11:16.846 { 00:11:16.846 "name": null, 00:11:16.846 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:16.846 "is_configured": false, 00:11:16.846 "data_offset": 0, 00:11:16.846 "data_size": 63488 00:11:16.846 }, 00:11:16.846 { 00:11:16.846 "name": "BaseBdev3", 00:11:16.846 "uuid": "a4d7e767-a183-5511-a1dc-960b83eb89c7", 00:11:16.846 "is_configured": true, 00:11:16.846 "data_offset": 2048, 00:11:16.846 "data_size": 63488 00:11:16.846 }, 00:11:16.846 { 00:11:16.846 "name": "BaseBdev4", 00:11:16.846 "uuid": "76057de4-51e1-5aaa-8c24-a6b91c2d4b44", 00:11:16.846 "is_configured": true, 00:11:16.846 "data_offset": 2048, 00:11:16.846 "data_size": 63488 00:11:16.846 } 00:11:16.846 ] 00:11:16.846 }' 00:11:16.846 19:52:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:16.846 19:52:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:11:16.846 19:52:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:16.846 19:52:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:11:16.846 19:52:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:11:16.846 19:52:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:16.846 19:52:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:16.846 19:52:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:16.846 19:52:07 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:16.846 19:52:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:16.846 19:52:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:16.846 19:52:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:16.846 19:52:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.846 19:52:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:16.846 19:52:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.846 19:52:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:16.846 "name": "raid_bdev1", 00:11:16.846 "uuid": "7c9461de-7ff5-4e18-ab2b-6c7188675bea", 00:11:16.846 "strip_size_kb": 0, 00:11:16.846 "state": "online", 00:11:16.846 "raid_level": "raid1", 00:11:16.846 "superblock": true, 00:11:16.846 "num_base_bdevs": 4, 00:11:16.846 "num_base_bdevs_discovered": 3, 00:11:16.846 "num_base_bdevs_operational": 3, 00:11:16.846 "base_bdevs_list": [ 00:11:16.846 { 00:11:16.846 "name": "spare", 00:11:16.846 "uuid": "bdfe947d-2c90-50b6-956b-d1901e30fb49", 00:11:16.846 "is_configured": true, 00:11:16.846 "data_offset": 2048, 00:11:16.846 "data_size": 63488 00:11:16.846 }, 00:11:16.846 { 00:11:16.846 "name": null, 00:11:16.846 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:16.846 "is_configured": false, 00:11:16.846 "data_offset": 0, 00:11:16.846 "data_size": 63488 00:11:16.846 }, 00:11:16.846 { 00:11:16.846 "name": "BaseBdev3", 00:11:16.846 "uuid": "a4d7e767-a183-5511-a1dc-960b83eb89c7", 00:11:16.846 "is_configured": true, 00:11:16.846 "data_offset": 2048, 00:11:16.846 "data_size": 63488 00:11:16.846 }, 00:11:16.846 { 00:11:16.846 "name": "BaseBdev4", 00:11:16.846 "uuid": 
"76057de4-51e1-5aaa-8c24-a6b91c2d4b44", 00:11:16.846 "is_configured": true, 00:11:16.846 "data_offset": 2048, 00:11:16.846 "data_size": 63488 00:11:16.846 } 00:11:16.846 ] 00:11:16.846 }' 00:11:16.846 19:52:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:17.104 19:52:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:17.104 19:52:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:17.104 19:52:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:17.104 19:52:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:17.104 19:52:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:17.104 19:52:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:17.104 19:52:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:17.104 19:52:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:17.104 19:52:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:17.104 19:52:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:17.104 19:52:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:17.104 19:52:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:17.104 19:52:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:17.104 19:52:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:17.104 19:52:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:11:17.104 19:52:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:17.104 19:52:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:17.104 19:52:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.104 19:52:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:17.104 "name": "raid_bdev1", 00:11:17.104 "uuid": "7c9461de-7ff5-4e18-ab2b-6c7188675bea", 00:11:17.104 "strip_size_kb": 0, 00:11:17.104 "state": "online", 00:11:17.104 "raid_level": "raid1", 00:11:17.104 "superblock": true, 00:11:17.104 "num_base_bdevs": 4, 00:11:17.104 "num_base_bdevs_discovered": 3, 00:11:17.104 "num_base_bdevs_operational": 3, 00:11:17.104 "base_bdevs_list": [ 00:11:17.104 { 00:11:17.104 "name": "spare", 00:11:17.104 "uuid": "bdfe947d-2c90-50b6-956b-d1901e30fb49", 00:11:17.104 "is_configured": true, 00:11:17.104 "data_offset": 2048, 00:11:17.104 "data_size": 63488 00:11:17.105 }, 00:11:17.105 { 00:11:17.105 "name": null, 00:11:17.105 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:17.105 "is_configured": false, 00:11:17.105 "data_offset": 0, 00:11:17.105 "data_size": 63488 00:11:17.105 }, 00:11:17.105 { 00:11:17.105 "name": "BaseBdev3", 00:11:17.105 "uuid": "a4d7e767-a183-5511-a1dc-960b83eb89c7", 00:11:17.105 "is_configured": true, 00:11:17.105 "data_offset": 2048, 00:11:17.105 "data_size": 63488 00:11:17.105 }, 00:11:17.105 { 00:11:17.105 "name": "BaseBdev4", 00:11:17.105 "uuid": "76057de4-51e1-5aaa-8c24-a6b91c2d4b44", 00:11:17.105 "is_configured": true, 00:11:17.105 "data_offset": 2048, 00:11:17.105 "data_size": 63488 00:11:17.105 } 00:11:17.105 ] 00:11:17.105 }' 00:11:17.105 19:52:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:17.105 19:52:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:17.363 91.14 IOPS, 
273.43 MiB/s [2024-11-26T19:52:08.300Z] 19:52:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:17.363 19:52:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.363 19:52:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:17.363 [2024-11-26 19:52:08.157258] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:17.363 [2024-11-26 19:52:08.157426] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:17.363 00:11:17.363 Latency(us) 00:11:17.363 [2024-11-26T19:52:08.300Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:17.363 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:11:17.363 raid_bdev1 : 7.25 89.68 269.03 0.00 0.00 15762.31 244.18 110503.78 00:11:17.363 [2024-11-26T19:52:08.300Z] =================================================================================================================== 00:11:17.363 [2024-11-26T19:52:08.300Z] Total : 89.68 269.03 0.00 0.00 15762.31 244.18 110503.78 00:11:17.363 { 00:11:17.363 "results": [ 00:11:17.363 { 00:11:17.363 "job": "raid_bdev1", 00:11:17.363 "core_mask": "0x1", 00:11:17.363 "workload": "randrw", 00:11:17.363 "percentage": 50, 00:11:17.363 "status": "finished", 00:11:17.363 "queue_depth": 2, 00:11:17.363 "io_size": 3145728, 00:11:17.363 "runtime": 7.248142, 00:11:17.363 "iops": 89.67815476021303, 00:11:17.363 "mibps": 269.0344642806391, 00:11:17.363 "io_failed": 0, 00:11:17.363 "io_timeout": 0, 00:11:17.363 "avg_latency_us": 15762.314452071007, 00:11:17.363 "min_latency_us": 244.1846153846154, 00:11:17.363 "max_latency_us": 110503.77846153846 00:11:17.363 } 00:11:17.363 ], 00:11:17.363 "core_count": 1 00:11:17.363 } 00:11:17.363 [2024-11-26 19:52:08.205482] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:11:17.363 [2024-11-26 19:52:08.205519] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:17.363 [2024-11-26 19:52:08.205616] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:17.363 [2024-11-26 19:52:08.205625] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:11:17.363 19:52:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.363 19:52:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:17.363 19:52:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:11:17.363 19:52:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.363 19:52:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:17.363 19:52:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.363 19:52:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:11:17.363 19:52:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:11:17.363 19:52:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:11:17.363 19:52:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:11:17.363 19:52:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:11:17.363 19:52:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:11:17.363 19:52:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:17.363 19:52:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:11:17.363 19:52:08 bdev_raid.raid_rebuild_test_sb_io 
-- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:17.363 19:52:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:11:17.363 19:52:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:17.363 19:52:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:17.363 19:52:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:11:17.621 /dev/nbd0 00:11:17.621 19:52:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:17.621 19:52:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:17.621 19:52:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:11:17.621 19:52:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:11:17.621 19:52:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:17.621 19:52:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:17.621 19:52:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:11:17.621 19:52:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:11:17.621 19:52:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:17.621 19:52:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:17.621 19:52:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:17.621 1+0 records in 00:11:17.621 1+0 records out 00:11:17.621 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000291262 s, 14.1 MB/s 00:11:17.621 19:52:08 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:17.621 19:52:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:11:17.621 19:52:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:17.621 19:52:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:17.621 19:52:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:11:17.621 19:52:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:17.621 19:52:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:17.621 19:52:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:11:17.621 19:52:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:11:17.621 19:52:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@728 -- # continue 00:11:17.621 19:52:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:11:17.621 19:52:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:11:17.621 19:52:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:11:17.621 19:52:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:11:17.621 19:52:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:11:17.621 19:52:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:17.621 19:52:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:11:17.621 19:52:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:17.622 
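The `waitfornbd` sequence traced above polls `/proc/partitions` for the new device, then confirms it with a single direct-I/O `dd` read. A minimal sketch of that readiness loop, assuming a parameterized partitions-file path so it can run without a live `/dev/nbdX` (the real helper in `autotest_common.sh` always reads `/proc/partitions` and follows up with the `dd`/`stat` check shown in the log):

```shell
#!/usr/bin/env bash
# Sketch of the nbd readiness poll: retry up to 20 times until the device
# name appears (word-match) in the partitions table.
waitfornbd_sketch() {
    local nbd_name=$1 partitions=${2:-/proc/partitions} i
    for ((i = 1; i <= 20; i++)); do
        if grep -q -w "$nbd_name" "$partitions"; then
            return 0
        fi
        sleep 0.1
    done
    return 1
}

# Demo against a mock partitions table instead of the kernel's.
mock=$(mktemp)
printf 'major minor  #blocks  name\n   43        0     102400 nbd0\n' > "$mock"
waitfornbd_sketch nbd0 "$mock" && echo "nbd0 ready"
waitfornbd_sketch nbd9 "$mock" || echo "nbd9 absent"
rm -f "$mock"
```

The word-match (`-w`) matters: plain `grep nbd0` would also match `nbd01`, which is why the traced helper uses `grep -q -w`.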
19:52:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:11:17.622 19:52:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:17.622 19:52:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:17.622 19:52:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:11:17.880 /dev/nbd1 00:11:17.880 19:52:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:11:17.880 19:52:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:11:17.880 19:52:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:11:17.880 19:52:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:11:17.880 19:52:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:17.880 19:52:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:17.880 19:52:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:11:17.880 19:52:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:11:17.880 19:52:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:17.880 19:52:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:17.880 19:52:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:17.880 1+0 records in 00:11:17.880 1+0 records out 00:11:17.880 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00027304 s, 15.0 MB/s 00:11:17.880 19:52:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:17.880 19:52:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:11:17.880 19:52:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:17.880 19:52:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:17.880 19:52:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:11:17.880 19:52:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:17.880 19:52:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:17.880 19:52:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:11:18.139 19:52:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:11:18.139 19:52:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:11:18.139 19:52:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:11:18.139 19:52:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:18.139 19:52:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:11:18.139 19:52:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:18.139 19:52:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:11:18.139 19:52:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:18.139 19:52:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:18.139 19:52:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 
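The data check above is `cmp -i 1048576 /dev/nbd0 /dev/nbd1`: `-i N` skips the first N bytes of both inputs, so only the payload past the first 1 MiB is compared (presumably to ignore the per-bdev metadata region, which legitimately differs between base bdevs). A minimal sketch of the same idea on temp files, with a 16-byte offset to keep the demo small:

```shell
#!/usr/bin/env bash
set -e
a=$(mktemp) b=$(mktemp)

# Different "metadata" headers (16 bytes each), identical payload after them.
{ printf 'header-AAAAAAAAA'; printf 'payload'; } > "$a"
{ printf 'header-BBBBBBBBB'; printf 'payload'; } > "$b"

# cmp -i 16 ignores the first 16 bytes of BOTH files before comparing.
if cmp -s -i 16 "$a" "$b"; then
    echo "payloads match"
fi
cmp -s "$a" "$b" || echo "full files differ"
rm -f "$a" "$b"
```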
00:11:18.139 19:52:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:18.139 19:52:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:18.139 19:52:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:18.139 19:52:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:11:18.139 19:52:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:11:18.139 19:52:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:11:18.139 19:52:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:11:18.139 19:52:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:11:18.139 19:52:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:11:18.139 19:52:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:11:18.139 19:52:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:18.139 19:52:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:11:18.139 19:52:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:18.139 19:52:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:11:18.139 19:52:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:18.139 19:52:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:18.139 19:52:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:11:18.397 /dev/nbd1 00:11:18.397 19:52:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # 
basename /dev/nbd1 00:11:18.397 19:52:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:11:18.397 19:52:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:11:18.397 19:52:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:11:18.397 19:52:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:18.397 19:52:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:18.397 19:52:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:11:18.397 19:52:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:11:18.397 19:52:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:18.397 19:52:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:18.397 19:52:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:18.397 1+0 records in 00:11:18.397 1+0 records out 00:11:18.397 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000239613 s, 17.1 MB/s 00:11:18.397 19:52:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:18.397 19:52:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:11:18.397 19:52:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:18.397 19:52:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:18.397 19:52:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:11:18.397 19:52:09 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:18.397 19:52:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:18.397 19:52:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:11:18.655 19:52:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:11:18.655 19:52:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:11:18.655 19:52:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:11:18.655 19:52:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:18.655 19:52:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:11:18.655 19:52:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:18.655 19:52:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:11:18.655 19:52:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:18.655 19:52:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:18.655 19:52:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:18.655 19:52:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:18.655 19:52:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:18.655 19:52:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:18.655 19:52:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:11:18.655 19:52:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:11:18.655 19:52:09 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:11:18.655 19:52:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:11:18.655 19:52:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:11:18.655 19:52:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:18.655 19:52:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:11:18.655 19:52:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:18.655 19:52:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:11:18.914 19:52:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:18.914 19:52:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:18.914 19:52:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:18.914 19:52:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:18.914 19:52:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:18.914 19:52:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:18.914 19:52:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:11:18.914 19:52:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:11:18.914 19:52:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:11:18.914 19:52:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:11:18.914 19:52:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.914 19:52:09 bdev_raid.raid_rebuild_test_sb_io 
-- common/autotest_common.sh@10 -- # set +x 00:11:18.914 19:52:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.914 19:52:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:11:18.914 19:52:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.914 19:52:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:18.914 [2024-11-26 19:52:09.779102] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:11:18.914 [2024-11-26 19:52:09.779152] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:18.914 [2024-11-26 19:52:09.779173] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:11:18.914 [2024-11-26 19:52:09.779181] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:18.914 [2024-11-26 19:52:09.781208] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:18.914 [2024-11-26 19:52:09.781242] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:11:18.914 [2024-11-26 19:52:09.781325] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:11:18.914 [2024-11-26 19:52:09.781383] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:18.914 [2024-11-26 19:52:09.781502] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:18.914 [2024-11-26 19:52:09.781593] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:18.914 spare 00:11:18.914 19:52:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.914 19:52:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:11:18.914 19:52:09 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.914 19:52:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:19.173 [2024-11-26 19:52:09.881689] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:11:19.173 [2024-11-26 19:52:09.881734] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:19.173 [2024-11-26 19:52:09.882053] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037160 00:11:19.173 [2024-11-26 19:52:09.882232] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:11:19.173 [2024-11-26 19:52:09.882250] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:11:19.173 [2024-11-26 19:52:09.882425] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:19.173 19:52:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.173 19:52:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:19.173 19:52:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:19.173 19:52:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:19.173 19:52:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:19.173 19:52:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:19.173 19:52:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:19.173 19:52:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:19.173 19:52:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:11:19.173 19:52:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:19.173 19:52:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:19.173 19:52:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:19.173 19:52:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:19.173 19:52:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.173 19:52:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:19.173 19:52:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.173 19:52:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:19.174 "name": "raid_bdev1", 00:11:19.174 "uuid": "7c9461de-7ff5-4e18-ab2b-6c7188675bea", 00:11:19.174 "strip_size_kb": 0, 00:11:19.174 "state": "online", 00:11:19.174 "raid_level": "raid1", 00:11:19.174 "superblock": true, 00:11:19.174 "num_base_bdevs": 4, 00:11:19.174 "num_base_bdevs_discovered": 3, 00:11:19.174 "num_base_bdevs_operational": 3, 00:11:19.174 "base_bdevs_list": [ 00:11:19.174 { 00:11:19.174 "name": "spare", 00:11:19.174 "uuid": "bdfe947d-2c90-50b6-956b-d1901e30fb49", 00:11:19.174 "is_configured": true, 00:11:19.174 "data_offset": 2048, 00:11:19.174 "data_size": 63488 00:11:19.174 }, 00:11:19.174 { 00:11:19.174 "name": null, 00:11:19.174 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:19.174 "is_configured": false, 00:11:19.174 "data_offset": 2048, 00:11:19.174 "data_size": 63488 00:11:19.174 }, 00:11:19.174 { 00:11:19.174 "name": "BaseBdev3", 00:11:19.174 "uuid": "a4d7e767-a183-5511-a1dc-960b83eb89c7", 00:11:19.174 "is_configured": true, 00:11:19.174 "data_offset": 2048, 00:11:19.174 "data_size": 63488 00:11:19.174 }, 00:11:19.174 { 00:11:19.174 "name": "BaseBdev4", 
00:11:19.174 "uuid": "76057de4-51e1-5aaa-8c24-a6b91c2d4b44", 00:11:19.174 "is_configured": true, 00:11:19.174 "data_offset": 2048, 00:11:19.174 "data_size": 63488 00:11:19.174 } 00:11:19.174 ] 00:11:19.174 }' 00:11:19.174 19:52:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:19.174 19:52:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:19.433 19:52:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:19.433 19:52:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:19.433 19:52:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:19.433 19:52:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:19.433 19:52:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:19.433 19:52:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:19.433 19:52:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:19.433 19:52:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.433 19:52:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:19.433 19:52:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.433 19:52:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:19.433 "name": "raid_bdev1", 00:11:19.433 "uuid": "7c9461de-7ff5-4e18-ab2b-6c7188675bea", 00:11:19.433 "strip_size_kb": 0, 00:11:19.433 "state": "online", 00:11:19.433 "raid_level": "raid1", 00:11:19.433 "superblock": true, 00:11:19.433 "num_base_bdevs": 4, 00:11:19.433 "num_base_bdevs_discovered": 3, 00:11:19.433 
"num_base_bdevs_operational": 3, 00:11:19.433 "base_bdevs_list": [ 00:11:19.433 { 00:11:19.433 "name": "spare", 00:11:19.433 "uuid": "bdfe947d-2c90-50b6-956b-d1901e30fb49", 00:11:19.433 "is_configured": true, 00:11:19.433 "data_offset": 2048, 00:11:19.433 "data_size": 63488 00:11:19.433 }, 00:11:19.433 { 00:11:19.433 "name": null, 00:11:19.433 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:19.433 "is_configured": false, 00:11:19.433 "data_offset": 2048, 00:11:19.433 "data_size": 63488 00:11:19.433 }, 00:11:19.433 { 00:11:19.433 "name": "BaseBdev3", 00:11:19.433 "uuid": "a4d7e767-a183-5511-a1dc-960b83eb89c7", 00:11:19.433 "is_configured": true, 00:11:19.433 "data_offset": 2048, 00:11:19.433 "data_size": 63488 00:11:19.433 }, 00:11:19.433 { 00:11:19.433 "name": "BaseBdev4", 00:11:19.433 "uuid": "76057de4-51e1-5aaa-8c24-a6b91c2d4b44", 00:11:19.433 "is_configured": true, 00:11:19.433 "data_offset": 2048, 00:11:19.433 "data_size": 63488 00:11:19.433 } 00:11:19.433 ] 00:11:19.433 }' 00:11:19.433 19:52:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:19.433 19:52:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:19.433 19:52:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:19.433 19:52:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:19.433 19:52:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:19.433 19:52:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:11:19.433 19:52:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.433 19:52:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:19.433 19:52:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:11:19.433 19:52:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:11:19.433 19:52:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:11:19.433 19:52:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.433 19:52:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:19.433 [2024-11-26 19:52:10.323313] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:19.433 19:52:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.433 19:52:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:19.433 19:52:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:19.433 19:52:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:19.433 19:52:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:19.433 19:52:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:19.433 19:52:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:19.433 19:52:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:19.434 19:52:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:19.434 19:52:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:19.434 19:52:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:19.434 19:52:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:19.434 19:52:10 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.434 19:52:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:19.434 19:52:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:19.434 19:52:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.434 19:52:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:19.434 "name": "raid_bdev1", 00:11:19.434 "uuid": "7c9461de-7ff5-4e18-ab2b-6c7188675bea", 00:11:19.434 "strip_size_kb": 0, 00:11:19.434 "state": "online", 00:11:19.434 "raid_level": "raid1", 00:11:19.434 "superblock": true, 00:11:19.434 "num_base_bdevs": 4, 00:11:19.434 "num_base_bdevs_discovered": 2, 00:11:19.434 "num_base_bdevs_operational": 2, 00:11:19.434 "base_bdevs_list": [ 00:11:19.434 { 00:11:19.434 "name": null, 00:11:19.434 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:19.434 "is_configured": false, 00:11:19.434 "data_offset": 0, 00:11:19.434 "data_size": 63488 00:11:19.434 }, 00:11:19.434 { 00:11:19.434 "name": null, 00:11:19.434 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:19.434 "is_configured": false, 00:11:19.434 "data_offset": 2048, 00:11:19.434 "data_size": 63488 00:11:19.434 }, 00:11:19.434 { 00:11:19.434 "name": "BaseBdev3", 00:11:19.434 "uuid": "a4d7e767-a183-5511-a1dc-960b83eb89c7", 00:11:19.434 "is_configured": true, 00:11:19.434 "data_offset": 2048, 00:11:19.434 "data_size": 63488 00:11:19.434 }, 00:11:19.434 { 00:11:19.434 "name": "BaseBdev4", 00:11:19.434 "uuid": "76057de4-51e1-5aaa-8c24-a6b91c2d4b44", 00:11:19.434 "is_configured": true, 00:11:19.434 "data_offset": 2048, 00:11:19.434 "data_size": 63488 00:11:19.434 } 00:11:19.434 ] 00:11:19.434 }' 00:11:19.434 19:52:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:19.434 19:52:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- 
# set +x 00:11:19.692 19:52:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:19.692 19:52:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.692 19:52:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:19.692 [2024-11-26 19:52:10.619437] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:19.692 [2024-11-26 19:52:10.619640] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:11:19.692 [2024-11-26 19:52:10.619660] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:11:19.692 [2024-11-26 19:52:10.619695] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:19.950 [2024-11-26 19:52:10.628033] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037230 00:11:19.950 19:52:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.950 19:52:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:11:19.950 [2024-11-26 19:52:10.629740] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:20.893 19:52:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:20.893 19:52:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:20.893 19:52:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:20.893 19:52:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:20.893 19:52:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:20.893 19:52:11 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:20.893 19:52:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.893 19:52:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:20.893 19:52:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:20.893 19:52:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.893 19:52:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:20.893 "name": "raid_bdev1", 00:11:20.893 "uuid": "7c9461de-7ff5-4e18-ab2b-6c7188675bea", 00:11:20.893 "strip_size_kb": 0, 00:11:20.893 "state": "online", 00:11:20.893 "raid_level": "raid1", 00:11:20.893 "superblock": true, 00:11:20.893 "num_base_bdevs": 4, 00:11:20.893 "num_base_bdevs_discovered": 3, 00:11:20.893 "num_base_bdevs_operational": 3, 00:11:20.893 "process": { 00:11:20.893 "type": "rebuild", 00:11:20.893 "target": "spare", 00:11:20.893 "progress": { 00:11:20.893 "blocks": 20480, 00:11:20.893 "percent": 32 00:11:20.893 } 00:11:20.893 }, 00:11:20.893 "base_bdevs_list": [ 00:11:20.893 { 00:11:20.893 "name": "spare", 00:11:20.893 "uuid": "bdfe947d-2c90-50b6-956b-d1901e30fb49", 00:11:20.893 "is_configured": true, 00:11:20.893 "data_offset": 2048, 00:11:20.893 "data_size": 63488 00:11:20.893 }, 00:11:20.893 { 00:11:20.893 "name": null, 00:11:20.893 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:20.893 "is_configured": false, 00:11:20.893 "data_offset": 2048, 00:11:20.893 "data_size": 63488 00:11:20.893 }, 00:11:20.893 { 00:11:20.893 "name": "BaseBdev3", 00:11:20.893 "uuid": "a4d7e767-a183-5511-a1dc-960b83eb89c7", 00:11:20.893 "is_configured": true, 00:11:20.893 "data_offset": 2048, 00:11:20.893 "data_size": 63488 00:11:20.893 }, 00:11:20.893 { 00:11:20.893 "name": "BaseBdev4", 00:11:20.893 "uuid": 
"76057de4-51e1-5aaa-8c24-a6b91c2d4b44", 00:11:20.893 "is_configured": true, 00:11:20.893 "data_offset": 2048, 00:11:20.893 "data_size": 63488 00:11:20.893 } 00:11:20.893 ] 00:11:20.893 }' 00:11:20.893 19:52:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:20.893 19:52:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:20.893 19:52:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:20.893 19:52:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:20.893 19:52:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:11:20.893 19:52:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.893 19:52:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:20.893 [2024-11-26 19:52:11.727700] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:20.893 [2024-11-26 19:52:11.736109] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:11:20.893 [2024-11-26 19:52:11.736162] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:20.893 [2024-11-26 19:52:11.736182] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:20.893 [2024-11-26 19:52:11.736189] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:11:20.893 19:52:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.893 19:52:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:20.893 19:52:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:20.893 19:52:11 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:20.893 19:52:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:20.893 19:52:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:20.893 19:52:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:20.893 19:52:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:20.893 19:52:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:20.893 19:52:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:20.893 19:52:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:20.893 19:52:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:20.893 19:52:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.893 19:52:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:20.893 19:52:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:20.893 19:52:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.893 19:52:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:20.893 "name": "raid_bdev1", 00:11:20.893 "uuid": "7c9461de-7ff5-4e18-ab2b-6c7188675bea", 00:11:20.893 "strip_size_kb": 0, 00:11:20.893 "state": "online", 00:11:20.893 "raid_level": "raid1", 00:11:20.893 "superblock": true, 00:11:20.893 "num_base_bdevs": 4, 00:11:20.893 "num_base_bdevs_discovered": 2, 00:11:20.893 "num_base_bdevs_operational": 2, 00:11:20.893 "base_bdevs_list": [ 00:11:20.893 { 00:11:20.893 "name": null, 00:11:20.893 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:11:20.893 "is_configured": false, 00:11:20.893 "data_offset": 0, 00:11:20.893 "data_size": 63488 00:11:20.893 }, 00:11:20.893 { 00:11:20.893 "name": null, 00:11:20.893 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:20.893 "is_configured": false, 00:11:20.893 "data_offset": 2048, 00:11:20.893 "data_size": 63488 00:11:20.893 }, 00:11:20.893 { 00:11:20.893 "name": "BaseBdev3", 00:11:20.893 "uuid": "a4d7e767-a183-5511-a1dc-960b83eb89c7", 00:11:20.893 "is_configured": true, 00:11:20.893 "data_offset": 2048, 00:11:20.893 "data_size": 63488 00:11:20.893 }, 00:11:20.893 { 00:11:20.893 "name": "BaseBdev4", 00:11:20.893 "uuid": "76057de4-51e1-5aaa-8c24-a6b91c2d4b44", 00:11:20.893 "is_configured": true, 00:11:20.893 "data_offset": 2048, 00:11:20.893 "data_size": 63488 00:11:20.893 } 00:11:20.893 ] 00:11:20.893 }' 00:11:20.893 19:52:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:20.893 19:52:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:21.152 19:52:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:11:21.152 19:52:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.152 19:52:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:21.152 [2024-11-26 19:52:12.066667] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:11:21.152 [2024-11-26 19:52:12.066732] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:21.152 [2024-11-26 19:52:12.066762] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:11:21.152 [2024-11-26 19:52:12.066770] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:21.152 [2024-11-26 19:52:12.067250] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: 
pt_bdev registered 00:11:21.152 [2024-11-26 19:52:12.067275] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:11:21.152 [2024-11-26 19:52:12.067377] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:11:21.152 [2024-11-26 19:52:12.067389] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:11:21.152 [2024-11-26 19:52:12.067400] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:11:21.152 [2024-11-26 19:52:12.067422] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:21.152 [2024-11-26 19:52:12.075690] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037300 00:11:21.152 spare 00:11:21.152 19:52:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.152 19:52:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:11:21.152 [2024-11-26 19:52:12.077439] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:22.526 19:52:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:22.526 19:52:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:22.526 19:52:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:22.526 19:52:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:22.526 19:52:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:22.526 19:52:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:22.526 19:52:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:11:22.526 19:52:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.526 19:52:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:22.526 19:52:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.526 19:52:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:22.526 "name": "raid_bdev1", 00:11:22.526 "uuid": "7c9461de-7ff5-4e18-ab2b-6c7188675bea", 00:11:22.526 "strip_size_kb": 0, 00:11:22.526 "state": "online", 00:11:22.526 "raid_level": "raid1", 00:11:22.526 "superblock": true, 00:11:22.526 "num_base_bdevs": 4, 00:11:22.526 "num_base_bdevs_discovered": 3, 00:11:22.526 "num_base_bdevs_operational": 3, 00:11:22.526 "process": { 00:11:22.526 "type": "rebuild", 00:11:22.526 "target": "spare", 00:11:22.526 "progress": { 00:11:22.527 "blocks": 20480, 00:11:22.527 "percent": 32 00:11:22.527 } 00:11:22.527 }, 00:11:22.527 "base_bdevs_list": [ 00:11:22.527 { 00:11:22.527 "name": "spare", 00:11:22.527 "uuid": "bdfe947d-2c90-50b6-956b-d1901e30fb49", 00:11:22.527 "is_configured": true, 00:11:22.527 "data_offset": 2048, 00:11:22.527 "data_size": 63488 00:11:22.527 }, 00:11:22.527 { 00:11:22.527 "name": null, 00:11:22.527 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:22.527 "is_configured": false, 00:11:22.527 "data_offset": 2048, 00:11:22.527 "data_size": 63488 00:11:22.527 }, 00:11:22.527 { 00:11:22.527 "name": "BaseBdev3", 00:11:22.527 "uuid": "a4d7e767-a183-5511-a1dc-960b83eb89c7", 00:11:22.527 "is_configured": true, 00:11:22.527 "data_offset": 2048, 00:11:22.527 "data_size": 63488 00:11:22.527 }, 00:11:22.527 { 00:11:22.527 "name": "BaseBdev4", 00:11:22.527 "uuid": "76057de4-51e1-5aaa-8c24-a6b91c2d4b44", 00:11:22.527 "is_configured": true, 00:11:22.527 "data_offset": 2048, 00:11:22.527 "data_size": 63488 00:11:22.527 } 00:11:22.527 ] 00:11:22.527 }' 00:11:22.527 19:52:13 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:22.527 19:52:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:22.527 19:52:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:22.527 19:52:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:22.527 19:52:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:11:22.527 19:52:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.527 19:52:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:22.527 [2024-11-26 19:52:13.187918] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:22.527 [2024-11-26 19:52:13.284445] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:11:22.527 [2024-11-26 19:52:13.284540] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:22.527 [2024-11-26 19:52:13.284554] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:22.527 [2024-11-26 19:52:13.284563] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:11:22.527 19:52:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.527 19:52:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:22.527 19:52:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:22.527 19:52:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:22.527 19:52:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 
00:11:22.527 19:52:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:22.527 19:52:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:22.527 19:52:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:22.527 19:52:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:22.527 19:52:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:22.527 19:52:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:22.527 19:52:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:22.527 19:52:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.527 19:52:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:22.527 19:52:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:22.527 19:52:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.527 19:52:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:22.527 "name": "raid_bdev1", 00:11:22.527 "uuid": "7c9461de-7ff5-4e18-ab2b-6c7188675bea", 00:11:22.527 "strip_size_kb": 0, 00:11:22.527 "state": "online", 00:11:22.527 "raid_level": "raid1", 00:11:22.527 "superblock": true, 00:11:22.527 "num_base_bdevs": 4, 00:11:22.527 "num_base_bdevs_discovered": 2, 00:11:22.527 "num_base_bdevs_operational": 2, 00:11:22.527 "base_bdevs_list": [ 00:11:22.527 { 00:11:22.527 "name": null, 00:11:22.527 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:22.527 "is_configured": false, 00:11:22.527 "data_offset": 0, 00:11:22.527 "data_size": 63488 00:11:22.527 }, 00:11:22.527 { 00:11:22.527 "name": null, 00:11:22.527 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:11:22.527 "is_configured": false, 00:11:22.527 "data_offset": 2048, 00:11:22.527 "data_size": 63488 00:11:22.527 }, 00:11:22.527 { 00:11:22.527 "name": "BaseBdev3", 00:11:22.527 "uuid": "a4d7e767-a183-5511-a1dc-960b83eb89c7", 00:11:22.527 "is_configured": true, 00:11:22.527 "data_offset": 2048, 00:11:22.527 "data_size": 63488 00:11:22.527 }, 00:11:22.527 { 00:11:22.527 "name": "BaseBdev4", 00:11:22.527 "uuid": "76057de4-51e1-5aaa-8c24-a6b91c2d4b44", 00:11:22.527 "is_configured": true, 00:11:22.527 "data_offset": 2048, 00:11:22.527 "data_size": 63488 00:11:22.527 } 00:11:22.527 ] 00:11:22.527 }' 00:11:22.527 19:52:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:22.527 19:52:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:22.786 19:52:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:22.786 19:52:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:22.786 19:52:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:22.786 19:52:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:22.786 19:52:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:22.786 19:52:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:22.786 19:52:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.786 19:52:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:22.786 19:52:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:22.786 19:52:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.786 
19:52:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:22.786 "name": "raid_bdev1", 00:11:22.786 "uuid": "7c9461de-7ff5-4e18-ab2b-6c7188675bea", 00:11:22.786 "strip_size_kb": 0, 00:11:22.786 "state": "online", 00:11:22.786 "raid_level": "raid1", 00:11:22.786 "superblock": true, 00:11:22.786 "num_base_bdevs": 4, 00:11:22.786 "num_base_bdevs_discovered": 2, 00:11:22.786 "num_base_bdevs_operational": 2, 00:11:22.786 "base_bdevs_list": [ 00:11:22.786 { 00:11:22.786 "name": null, 00:11:22.786 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:22.786 "is_configured": false, 00:11:22.786 "data_offset": 0, 00:11:22.786 "data_size": 63488 00:11:22.786 }, 00:11:22.786 { 00:11:22.786 "name": null, 00:11:22.786 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:22.786 "is_configured": false, 00:11:22.786 "data_offset": 2048, 00:11:22.786 "data_size": 63488 00:11:22.786 }, 00:11:22.786 { 00:11:22.786 "name": "BaseBdev3", 00:11:22.786 "uuid": "a4d7e767-a183-5511-a1dc-960b83eb89c7", 00:11:22.786 "is_configured": true, 00:11:22.786 "data_offset": 2048, 00:11:22.786 "data_size": 63488 00:11:22.786 }, 00:11:22.786 { 00:11:22.786 "name": "BaseBdev4", 00:11:22.786 "uuid": "76057de4-51e1-5aaa-8c24-a6b91c2d4b44", 00:11:22.786 "is_configured": true, 00:11:22.786 "data_offset": 2048, 00:11:22.786 "data_size": 63488 00:11:22.786 } 00:11:22.786 ] 00:11:22.786 }' 00:11:22.786 19:52:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:22.786 19:52:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:22.786 19:52:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:22.786 19:52:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:22.786 19:52:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 
00:11:22.786 19:52:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.786 19:52:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:22.786 19:52:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.786 19:52:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:11:22.786 19:52:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.786 19:52:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:22.786 [2024-11-26 19:52:13.718706] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:11:22.786 [2024-11-26 19:52:13.718765] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:22.786 [2024-11-26 19:52:13.718784] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000cc80 00:11:22.786 [2024-11-26 19:52:13.718794] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:22.786 [2024-11-26 19:52:13.719230] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:22.786 [2024-11-26 19:52:13.719255] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:22.786 [2024-11-26 19:52:13.719328] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:11:22.786 [2024-11-26 19:52:13.719361] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:11:22.786 [2024-11-26 19:52:13.719368] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:11:22.786 [2024-11-26 19:52:13.719380] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: 
Invalid argument 00:11:23.045 BaseBdev1 00:11:23.045 19:52:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.045 19:52:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:11:23.978 19:52:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:23.978 19:52:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:23.978 19:52:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:23.978 19:52:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:23.978 19:52:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:23.978 19:52:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:23.978 19:52:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:23.978 19:52:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:23.978 19:52:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:23.978 19:52:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:23.978 19:52:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:23.978 19:52:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:23.978 19:52:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.978 19:52:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:23.978 19:52:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.978 19:52:14 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:23.978 "name": "raid_bdev1", 00:11:23.978 "uuid": "7c9461de-7ff5-4e18-ab2b-6c7188675bea", 00:11:23.978 "strip_size_kb": 0, 00:11:23.978 "state": "online", 00:11:23.978 "raid_level": "raid1", 00:11:23.978 "superblock": true, 00:11:23.978 "num_base_bdevs": 4, 00:11:23.978 "num_base_bdevs_discovered": 2, 00:11:23.978 "num_base_bdevs_operational": 2, 00:11:23.978 "base_bdevs_list": [ 00:11:23.978 { 00:11:23.978 "name": null, 00:11:23.978 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:23.978 "is_configured": false, 00:11:23.978 "data_offset": 0, 00:11:23.978 "data_size": 63488 00:11:23.978 }, 00:11:23.978 { 00:11:23.978 "name": null, 00:11:23.978 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:23.978 "is_configured": false, 00:11:23.978 "data_offset": 2048, 00:11:23.978 "data_size": 63488 00:11:23.978 }, 00:11:23.978 { 00:11:23.978 "name": "BaseBdev3", 00:11:23.978 "uuid": "a4d7e767-a183-5511-a1dc-960b83eb89c7", 00:11:23.978 "is_configured": true, 00:11:23.978 "data_offset": 2048, 00:11:23.978 "data_size": 63488 00:11:23.978 }, 00:11:23.978 { 00:11:23.978 "name": "BaseBdev4", 00:11:23.978 "uuid": "76057de4-51e1-5aaa-8c24-a6b91c2d4b44", 00:11:23.978 "is_configured": true, 00:11:23.978 "data_offset": 2048, 00:11:23.978 "data_size": 63488 00:11:23.978 } 00:11:23.978 ] 00:11:23.978 }' 00:11:23.978 19:52:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:23.978 19:52:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:24.236 19:52:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:24.236 19:52:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:24.236 19:52:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:24.236 19:52:15 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@171 -- # local target=none 00:11:24.236 19:52:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:24.236 19:52:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:24.236 19:52:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:24.236 19:52:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.236 19:52:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:24.236 19:52:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.236 19:52:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:24.236 "name": "raid_bdev1", 00:11:24.236 "uuid": "7c9461de-7ff5-4e18-ab2b-6c7188675bea", 00:11:24.236 "strip_size_kb": 0, 00:11:24.236 "state": "online", 00:11:24.236 "raid_level": "raid1", 00:11:24.236 "superblock": true, 00:11:24.236 "num_base_bdevs": 4, 00:11:24.236 "num_base_bdevs_discovered": 2, 00:11:24.236 "num_base_bdevs_operational": 2, 00:11:24.236 "base_bdevs_list": [ 00:11:24.236 { 00:11:24.236 "name": null, 00:11:24.236 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:24.236 "is_configured": false, 00:11:24.236 "data_offset": 0, 00:11:24.236 "data_size": 63488 00:11:24.236 }, 00:11:24.236 { 00:11:24.236 "name": null, 00:11:24.236 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:24.236 "is_configured": false, 00:11:24.236 "data_offset": 2048, 00:11:24.236 "data_size": 63488 00:11:24.236 }, 00:11:24.236 { 00:11:24.236 "name": "BaseBdev3", 00:11:24.236 "uuid": "a4d7e767-a183-5511-a1dc-960b83eb89c7", 00:11:24.236 "is_configured": true, 00:11:24.236 "data_offset": 2048, 00:11:24.236 "data_size": 63488 00:11:24.236 }, 00:11:24.236 { 00:11:24.236 "name": "BaseBdev4", 00:11:24.236 "uuid": "76057de4-51e1-5aaa-8c24-a6b91c2d4b44", 
00:11:24.236 "is_configured": true, 00:11:24.236 "data_offset": 2048, 00:11:24.236 "data_size": 63488 00:11:24.236 } 00:11:24.236 ] 00:11:24.236 }' 00:11:24.236 19:52:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:24.236 19:52:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:24.236 19:52:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:24.236 19:52:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:24.236 19:52:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:11:24.236 19:52:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # local es=0 00:11:24.236 19:52:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:11:24.236 19:52:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:11:24.236 19:52:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:24.236 19:52:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:11:24.237 19:52:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:24.237 19:52:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:11:24.237 19:52:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.237 19:52:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:24.237 [2024-11-26 19:52:15.147186] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:24.237 [2024-11-26 
19:52:15.147395] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:11:24.237 [2024-11-26 19:52:15.147414] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:11:24.237 request: 00:11:24.237 { 00:11:24.237 "base_bdev": "BaseBdev1", 00:11:24.237 "raid_bdev": "raid_bdev1", 00:11:24.237 "method": "bdev_raid_add_base_bdev", 00:11:24.237 "req_id": 1 00:11:24.237 } 00:11:24.237 Got JSON-RPC error response 00:11:24.237 response: 00:11:24.237 { 00:11:24.237 "code": -22, 00:11:24.237 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:11:24.237 } 00:11:24.237 19:52:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:11:24.237 19:52:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # es=1 00:11:24.237 19:52:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:24.237 19:52:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:24.237 19:52:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:24.237 19:52:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:11:25.610 19:52:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:25.610 19:52:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:25.610 19:52:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:25.610 19:52:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:25.610 19:52:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:25.610 19:52:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- 
# local num_base_bdevs_operational=2 00:11:25.610 19:52:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:25.610 19:52:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:25.610 19:52:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:25.610 19:52:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:25.610 19:52:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:25.610 19:52:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:25.610 19:52:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.610 19:52:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:25.610 19:52:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.610 19:52:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:25.610 "name": "raid_bdev1", 00:11:25.610 "uuid": "7c9461de-7ff5-4e18-ab2b-6c7188675bea", 00:11:25.610 "strip_size_kb": 0, 00:11:25.610 "state": "online", 00:11:25.610 "raid_level": "raid1", 00:11:25.610 "superblock": true, 00:11:25.610 "num_base_bdevs": 4, 00:11:25.610 "num_base_bdevs_discovered": 2, 00:11:25.610 "num_base_bdevs_operational": 2, 00:11:25.610 "base_bdevs_list": [ 00:11:25.610 { 00:11:25.610 "name": null, 00:11:25.610 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:25.610 "is_configured": false, 00:11:25.610 "data_offset": 0, 00:11:25.610 "data_size": 63488 00:11:25.610 }, 00:11:25.610 { 00:11:25.610 "name": null, 00:11:25.610 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:25.610 "is_configured": false, 00:11:25.610 "data_offset": 2048, 00:11:25.610 "data_size": 63488 00:11:25.610 }, 00:11:25.610 { 00:11:25.610 "name": 
"BaseBdev3", 00:11:25.611 "uuid": "a4d7e767-a183-5511-a1dc-960b83eb89c7", 00:11:25.611 "is_configured": true, 00:11:25.611 "data_offset": 2048, 00:11:25.611 "data_size": 63488 00:11:25.611 }, 00:11:25.611 { 00:11:25.611 "name": "BaseBdev4", 00:11:25.611 "uuid": "76057de4-51e1-5aaa-8c24-a6b91c2d4b44", 00:11:25.611 "is_configured": true, 00:11:25.611 "data_offset": 2048, 00:11:25.611 "data_size": 63488 00:11:25.611 } 00:11:25.611 ] 00:11:25.611 }' 00:11:25.611 19:52:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:25.611 19:52:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:25.611 19:52:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:25.611 19:52:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:25.611 19:52:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:25.611 19:52:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:25.611 19:52:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:25.611 19:52:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:25.611 19:52:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:25.611 19:52:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.611 19:52:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:25.611 19:52:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.611 19:52:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:25.611 "name": "raid_bdev1", 00:11:25.611 "uuid": "7c9461de-7ff5-4e18-ab2b-6c7188675bea", 00:11:25.611 
"strip_size_kb": 0, 00:11:25.611 "state": "online", 00:11:25.611 "raid_level": "raid1", 00:11:25.611 "superblock": true, 00:11:25.611 "num_base_bdevs": 4, 00:11:25.611 "num_base_bdevs_discovered": 2, 00:11:25.611 "num_base_bdevs_operational": 2, 00:11:25.611 "base_bdevs_list": [ 00:11:25.611 { 00:11:25.611 "name": null, 00:11:25.611 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:25.611 "is_configured": false, 00:11:25.611 "data_offset": 0, 00:11:25.611 "data_size": 63488 00:11:25.611 }, 00:11:25.611 { 00:11:25.611 "name": null, 00:11:25.611 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:25.611 "is_configured": false, 00:11:25.611 "data_offset": 2048, 00:11:25.611 "data_size": 63488 00:11:25.611 }, 00:11:25.611 { 00:11:25.611 "name": "BaseBdev3", 00:11:25.611 "uuid": "a4d7e767-a183-5511-a1dc-960b83eb89c7", 00:11:25.611 "is_configured": true, 00:11:25.611 "data_offset": 2048, 00:11:25.611 "data_size": 63488 00:11:25.611 }, 00:11:25.611 { 00:11:25.611 "name": "BaseBdev4", 00:11:25.611 "uuid": "76057de4-51e1-5aaa-8c24-a6b91c2d4b44", 00:11:25.611 "is_configured": true, 00:11:25.611 "data_offset": 2048, 00:11:25.611 "data_size": 63488 00:11:25.611 } 00:11:25.611 ] 00:11:25.611 }' 00:11:25.611 19:52:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:25.611 19:52:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:25.611 19:52:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:25.611 19:52:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:25.611 19:52:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 76966 00:11:25.611 19:52:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # '[' -z 76966 ']' 00:11:25.611 19:52:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # kill -0 76966 00:11:25.611 
19:52:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # uname 00:11:25.869 19:52:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:25.869 19:52:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76966 00:11:25.869 19:52:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:25.869 killing process with pid 76966 00:11:25.869 19:52:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:25.869 19:52:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76966' 00:11:25.869 Received shutdown signal, test time was about 15.626059 seconds 00:11:25.869 00:11:25.869 Latency(us) 00:11:25.869 [2024-11-26T19:52:16.806Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:25.869 [2024-11-26T19:52:16.806Z] =================================================================================================================== 00:11:25.869 [2024-11-26T19:52:16.806Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:25.869 19:52:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@973 -- # kill 76966 00:11:25.869 [2024-11-26 19:52:16.571144] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:25.869 19:52:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@978 -- # wait 76966 00:11:25.869 [2024-11-26 19:52:16.571272] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:25.869 [2024-11-26 19:52:16.571361] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:25.869 [2024-11-26 19:52:16.571372] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:11:25.869 [2024-11-26 19:52:16.787414] 
bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:26.805 19:52:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:11:26.805 00:11:26.805 real 0m18.101s 00:11:26.805 user 0m23.048s 00:11:26.805 sys 0m1.787s 00:11:26.805 19:52:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:26.805 19:52:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:26.805 ************************************ 00:11:26.805 END TEST raid_rebuild_test_sb_io 00:11:26.805 ************************************ 00:11:26.805 19:52:17 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:11:26.805 19:52:17 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 3 false 00:11:26.805 19:52:17 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:26.805 19:52:17 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:26.805 19:52:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:26.805 ************************************ 00:11:26.805 START TEST raid5f_state_function_test 00:11:26.805 ************************************ 00:11:26.805 19:52:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 3 false 00:11:26.805 19:52:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:11:26.805 19:52:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:11:26.805 19:52:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:11:26.805 19:52:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:26.805 19:52:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:26.805 19:52:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= 
num_base_bdevs )) 00:11:26.805 19:52:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:26.805 19:52:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:26.805 19:52:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:26.805 19:52:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:26.805 19:52:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:26.805 19:52:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:26.805 19:52:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:26.805 19:52:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:26.805 19:52:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:26.805 19:52:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:11:26.805 19:52:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:26.805 19:52:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:26.805 19:52:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:26.805 19:52:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:26.805 19:52:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:26.805 19:52:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:11:26.805 19:52:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:11:26.805 19:52:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # 
strip_size_create_arg='-z 64' 00:11:26.805 19:52:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:11:26.805 19:52:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:11:26.805 19:52:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=77660 00:11:26.805 Process raid pid: 77660 00:11:26.805 19:52:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 77660' 00:11:26.805 19:52:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 77660 00:11:26.805 19:52:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 77660 ']' 00:11:26.805 19:52:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:26.805 19:52:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:26.805 19:52:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:26.805 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:26.805 19:52:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:26.805 19:52:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:26.805 19:52:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.805 [2024-11-26 19:52:17.537534] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 
00:11:26.805 [2024-11-26 19:52:17.537669] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:26.806 [2024-11-26 19:52:17.697132] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:27.064 [2024-11-26 19:52:17.793632] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:27.064 [2024-11-26 19:52:17.914968] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:27.064 [2024-11-26 19:52:17.915004] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:27.629 19:52:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:27.629 19:52:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:11:27.629 19:52:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:27.629 19:52:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.629 19:52:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.629 [2024-11-26 19:52:18.387941] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:27.629 [2024-11-26 19:52:18.387997] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:27.629 [2024-11-26 19:52:18.388006] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:27.629 [2024-11-26 19:52:18.388014] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:27.629 [2024-11-26 19:52:18.388020] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:11:27.629 [2024-11-26 19:52:18.388028] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:27.629 19:52:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.629 19:52:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:11:27.629 19:52:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:27.629 19:52:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:27.629 19:52:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:11:27.629 19:52:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:27.629 19:52:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:27.629 19:52:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:27.629 19:52:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:27.629 19:52:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:27.629 19:52:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:27.629 19:52:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.629 19:52:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.629 19:52:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:27.629 19:52:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.629 19:52:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:11:27.629 19:52:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:27.629 "name": "Existed_Raid", 00:11:27.629 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:27.629 "strip_size_kb": 64, 00:11:27.629 "state": "configuring", 00:11:27.629 "raid_level": "raid5f", 00:11:27.629 "superblock": false, 00:11:27.629 "num_base_bdevs": 3, 00:11:27.629 "num_base_bdevs_discovered": 0, 00:11:27.629 "num_base_bdevs_operational": 3, 00:11:27.629 "base_bdevs_list": [ 00:11:27.629 { 00:11:27.629 "name": "BaseBdev1", 00:11:27.629 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:27.629 "is_configured": false, 00:11:27.629 "data_offset": 0, 00:11:27.629 "data_size": 0 00:11:27.629 }, 00:11:27.629 { 00:11:27.629 "name": "BaseBdev2", 00:11:27.629 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:27.629 "is_configured": false, 00:11:27.629 "data_offset": 0, 00:11:27.629 "data_size": 0 00:11:27.629 }, 00:11:27.629 { 00:11:27.629 "name": "BaseBdev3", 00:11:27.629 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:27.629 "is_configured": false, 00:11:27.629 "data_offset": 0, 00:11:27.629 "data_size": 0 00:11:27.629 } 00:11:27.629 ] 00:11:27.629 }' 00:11:27.629 19:52:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:27.629 19:52:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.888 19:52:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:27.888 19:52:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.888 19:52:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.888 [2024-11-26 19:52:18.687917] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:27.888 [2024-11-26 19:52:18.687954] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007780 name Existed_Raid, state configuring 00:11:27.888 19:52:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.888 19:52:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:27.888 19:52:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.888 19:52:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.888 [2024-11-26 19:52:18.695913] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:27.888 [2024-11-26 19:52:18.695953] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:27.888 [2024-11-26 19:52:18.695960] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:27.888 [2024-11-26 19:52:18.695968] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:27.888 [2024-11-26 19:52:18.695973] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:27.888 [2024-11-26 19:52:18.695980] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:27.888 19:52:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.888 19:52:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:27.888 19:52:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.888 19:52:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.888 [2024-11-26 19:52:18.726015] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:27.888 BaseBdev1 00:11:27.888 19:52:18 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.888 19:52:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:27.888 19:52:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:27.888 19:52:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:27.888 19:52:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:27.888 19:52:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:27.888 19:52:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:27.888 19:52:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:27.888 19:52:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.888 19:52:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.888 19:52:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.888 19:52:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:27.888 19:52:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.888 19:52:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.888 [ 00:11:27.888 { 00:11:27.888 "name": "BaseBdev1", 00:11:27.888 "aliases": [ 00:11:27.888 "7acb3551-7326-4599-8e63-d7bbe3bbb3d2" 00:11:27.888 ], 00:11:27.888 "product_name": "Malloc disk", 00:11:27.888 "block_size": 512, 00:11:27.888 "num_blocks": 65536, 00:11:27.888 "uuid": "7acb3551-7326-4599-8e63-d7bbe3bbb3d2", 00:11:27.888 "assigned_rate_limits": { 00:11:27.888 "rw_ios_per_sec": 0, 00:11:27.888 
"rw_mbytes_per_sec": 0, 00:11:27.888 "r_mbytes_per_sec": 0, 00:11:27.888 "w_mbytes_per_sec": 0 00:11:27.888 }, 00:11:27.888 "claimed": true, 00:11:27.889 "claim_type": "exclusive_write", 00:11:27.889 "zoned": false, 00:11:27.889 "supported_io_types": { 00:11:27.889 "read": true, 00:11:27.889 "write": true, 00:11:27.889 "unmap": true, 00:11:27.889 "flush": true, 00:11:27.889 "reset": true, 00:11:27.889 "nvme_admin": false, 00:11:27.889 "nvme_io": false, 00:11:27.889 "nvme_io_md": false, 00:11:27.889 "write_zeroes": true, 00:11:27.889 "zcopy": true, 00:11:27.889 "get_zone_info": false, 00:11:27.889 "zone_management": false, 00:11:27.889 "zone_append": false, 00:11:27.889 "compare": false, 00:11:27.889 "compare_and_write": false, 00:11:27.889 "abort": true, 00:11:27.889 "seek_hole": false, 00:11:27.889 "seek_data": false, 00:11:27.889 "copy": true, 00:11:27.889 "nvme_iov_md": false 00:11:27.889 }, 00:11:27.889 "memory_domains": [ 00:11:27.889 { 00:11:27.889 "dma_device_id": "system", 00:11:27.889 "dma_device_type": 1 00:11:27.889 }, 00:11:27.889 { 00:11:27.889 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:27.889 "dma_device_type": 2 00:11:27.889 } 00:11:27.889 ], 00:11:27.889 "driver_specific": {} 00:11:27.889 } 00:11:27.889 ] 00:11:27.889 19:52:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.889 19:52:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:27.889 19:52:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:11:27.889 19:52:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:27.889 19:52:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:27.889 19:52:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:11:27.889 19:52:18 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:27.889 19:52:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:27.889 19:52:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:27.889 19:52:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:27.889 19:52:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:27.889 19:52:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:27.889 19:52:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.889 19:52:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:27.889 19:52:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.889 19:52:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.889 19:52:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.889 19:52:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:27.889 "name": "Existed_Raid", 00:11:27.889 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:27.889 "strip_size_kb": 64, 00:11:27.889 "state": "configuring", 00:11:27.889 "raid_level": "raid5f", 00:11:27.889 "superblock": false, 00:11:27.889 "num_base_bdevs": 3, 00:11:27.889 "num_base_bdevs_discovered": 1, 00:11:27.889 "num_base_bdevs_operational": 3, 00:11:27.889 "base_bdevs_list": [ 00:11:27.889 { 00:11:27.889 "name": "BaseBdev1", 00:11:27.889 "uuid": "7acb3551-7326-4599-8e63-d7bbe3bbb3d2", 00:11:27.889 "is_configured": true, 00:11:27.889 "data_offset": 0, 00:11:27.889 "data_size": 65536 00:11:27.889 }, 00:11:27.889 { 00:11:27.889 "name": 
"BaseBdev2", 00:11:27.889 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:27.889 "is_configured": false, 00:11:27.889 "data_offset": 0, 00:11:27.889 "data_size": 0 00:11:27.889 }, 00:11:27.889 { 00:11:27.889 "name": "BaseBdev3", 00:11:27.889 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:27.889 "is_configured": false, 00:11:27.889 "data_offset": 0, 00:11:27.889 "data_size": 0 00:11:27.889 } 00:11:27.889 ] 00:11:27.889 }' 00:11:27.889 19:52:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:27.889 19:52:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.148 19:52:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:28.148 19:52:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.148 19:52:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.148 [2024-11-26 19:52:19.066113] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:28.148 [2024-11-26 19:52:19.066162] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:28.148 19:52:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.148 19:52:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:28.148 19:52:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.148 19:52:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.148 [2024-11-26 19:52:19.078161] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:28.148 [2024-11-26 19:52:19.079804] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev2 00:11:28.148 [2024-11-26 19:52:19.079844] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:28.148 [2024-11-26 19:52:19.079852] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:28.148 [2024-11-26 19:52:19.079860] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:28.148 19:52:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.148 19:52:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:28.148 19:52:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:28.148 19:52:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:11:28.148 19:52:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:28.148 19:52:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:28.148 19:52:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:11:28.148 19:52:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:28.406 19:52:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:28.406 19:52:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:28.406 19:52:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:28.406 19:52:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:28.406 19:52:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:28.406 19:52:19 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:28.406 19:52:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:28.406 19:52:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.406 19:52:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.406 19:52:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.406 19:52:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:28.406 "name": "Existed_Raid", 00:11:28.406 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:28.406 "strip_size_kb": 64, 00:11:28.406 "state": "configuring", 00:11:28.406 "raid_level": "raid5f", 00:11:28.406 "superblock": false, 00:11:28.406 "num_base_bdevs": 3, 00:11:28.406 "num_base_bdevs_discovered": 1, 00:11:28.406 "num_base_bdevs_operational": 3, 00:11:28.406 "base_bdevs_list": [ 00:11:28.406 { 00:11:28.406 "name": "BaseBdev1", 00:11:28.406 "uuid": "7acb3551-7326-4599-8e63-d7bbe3bbb3d2", 00:11:28.406 "is_configured": true, 00:11:28.406 "data_offset": 0, 00:11:28.406 "data_size": 65536 00:11:28.406 }, 00:11:28.406 { 00:11:28.406 "name": "BaseBdev2", 00:11:28.406 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:28.406 "is_configured": false, 00:11:28.406 "data_offset": 0, 00:11:28.406 "data_size": 0 00:11:28.406 }, 00:11:28.406 { 00:11:28.406 "name": "BaseBdev3", 00:11:28.406 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:28.406 "is_configured": false, 00:11:28.406 "data_offset": 0, 00:11:28.406 "data_size": 0 00:11:28.406 } 00:11:28.406 ] 00:11:28.406 }' 00:11:28.406 19:52:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:28.406 19:52:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.665 19:52:19 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:28.665 19:52:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.665 19:52:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.665 [2024-11-26 19:52:19.430551] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:28.665 BaseBdev2 00:11:28.665 19:52:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.665 19:52:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:28.665 19:52:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:28.665 19:52:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:28.665 19:52:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:28.665 19:52:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:28.665 19:52:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:28.665 19:52:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:28.665 19:52:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.665 19:52:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.665 19:52:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.665 19:52:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:28.665 19:52:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.665 19:52:19 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:28.665 [ 00:11:28.665 { 00:11:28.665 "name": "BaseBdev2", 00:11:28.665 "aliases": [ 00:11:28.665 "09046e46-1694-4cde-be83-7a48019dca50" 00:11:28.665 ], 00:11:28.665 "product_name": "Malloc disk", 00:11:28.665 "block_size": 512, 00:11:28.665 "num_blocks": 65536, 00:11:28.665 "uuid": "09046e46-1694-4cde-be83-7a48019dca50", 00:11:28.665 "assigned_rate_limits": { 00:11:28.665 "rw_ios_per_sec": 0, 00:11:28.665 "rw_mbytes_per_sec": 0, 00:11:28.665 "r_mbytes_per_sec": 0, 00:11:28.665 "w_mbytes_per_sec": 0 00:11:28.665 }, 00:11:28.665 "claimed": true, 00:11:28.665 "claim_type": "exclusive_write", 00:11:28.665 "zoned": false, 00:11:28.665 "supported_io_types": { 00:11:28.665 "read": true, 00:11:28.665 "write": true, 00:11:28.665 "unmap": true, 00:11:28.665 "flush": true, 00:11:28.665 "reset": true, 00:11:28.665 "nvme_admin": false, 00:11:28.665 "nvme_io": false, 00:11:28.665 "nvme_io_md": false, 00:11:28.665 "write_zeroes": true, 00:11:28.665 "zcopy": true, 00:11:28.665 "get_zone_info": false, 00:11:28.665 "zone_management": false, 00:11:28.665 "zone_append": false, 00:11:28.665 "compare": false, 00:11:28.665 "compare_and_write": false, 00:11:28.665 "abort": true, 00:11:28.665 "seek_hole": false, 00:11:28.665 "seek_data": false, 00:11:28.665 "copy": true, 00:11:28.665 "nvme_iov_md": false 00:11:28.665 }, 00:11:28.665 "memory_domains": [ 00:11:28.665 { 00:11:28.665 "dma_device_id": "system", 00:11:28.665 "dma_device_type": 1 00:11:28.665 }, 00:11:28.665 { 00:11:28.665 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:28.665 "dma_device_type": 2 00:11:28.665 } 00:11:28.665 ], 00:11:28.665 "driver_specific": {} 00:11:28.665 } 00:11:28.665 ] 00:11:28.665 19:52:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.665 19:52:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:28.665 19:52:19 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:28.665 19:52:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:28.665 19:52:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:11:28.665 19:52:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:28.665 19:52:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:28.665 19:52:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:11:28.665 19:52:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:28.665 19:52:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:28.665 19:52:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:28.665 19:52:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:28.665 19:52:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:28.665 19:52:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:28.665 19:52:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:28.666 19:52:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:28.666 19:52:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.666 19:52:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.666 19:52:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.666 19:52:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- 
# raid_bdev_info='{ 00:11:28.666 "name": "Existed_Raid", 00:11:28.666 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:28.666 "strip_size_kb": 64, 00:11:28.666 "state": "configuring", 00:11:28.666 "raid_level": "raid5f", 00:11:28.666 "superblock": false, 00:11:28.666 "num_base_bdevs": 3, 00:11:28.666 "num_base_bdevs_discovered": 2, 00:11:28.666 "num_base_bdevs_operational": 3, 00:11:28.666 "base_bdevs_list": [ 00:11:28.666 { 00:11:28.666 "name": "BaseBdev1", 00:11:28.666 "uuid": "7acb3551-7326-4599-8e63-d7bbe3bbb3d2", 00:11:28.666 "is_configured": true, 00:11:28.666 "data_offset": 0, 00:11:28.666 "data_size": 65536 00:11:28.666 }, 00:11:28.666 { 00:11:28.666 "name": "BaseBdev2", 00:11:28.666 "uuid": "09046e46-1694-4cde-be83-7a48019dca50", 00:11:28.666 "is_configured": true, 00:11:28.666 "data_offset": 0, 00:11:28.666 "data_size": 65536 00:11:28.666 }, 00:11:28.666 { 00:11:28.666 "name": "BaseBdev3", 00:11:28.666 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:28.666 "is_configured": false, 00:11:28.666 "data_offset": 0, 00:11:28.666 "data_size": 0 00:11:28.666 } 00:11:28.666 ] 00:11:28.666 }' 00:11:28.666 19:52:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:28.666 19:52:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.924 19:52:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:28.924 19:52:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.924 19:52:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.924 [2024-11-26 19:52:19.798773] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:28.924 [2024-11-26 19:52:19.798827] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:28.924 [2024-11-26 19:52:19.798839] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:11:28.924 [2024-11-26 19:52:19.799078] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:11:28.924 [2024-11-26 19:52:19.802205] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:28.924 [2024-11-26 19:52:19.802228] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:11:28.924 [2024-11-26 19:52:19.802464] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:28.924 BaseBdev3 00:11:28.924 19:52:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.924 19:52:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:28.924 19:52:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:28.924 19:52:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:28.924 19:52:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:28.924 19:52:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:28.924 19:52:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:28.924 19:52:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:28.925 19:52:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.925 19:52:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.925 19:52:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.925 19:52:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev3 -t 2000 00:11:28.925 19:52:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.925 19:52:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.925 [ 00:11:28.925 { 00:11:28.925 "name": "BaseBdev3", 00:11:28.925 "aliases": [ 00:11:28.925 "e211bbdd-e3aa-4a26-8d79-ed98e74e5be1" 00:11:28.925 ], 00:11:28.925 "product_name": "Malloc disk", 00:11:28.925 "block_size": 512, 00:11:28.925 "num_blocks": 65536, 00:11:28.925 "uuid": "e211bbdd-e3aa-4a26-8d79-ed98e74e5be1", 00:11:28.925 "assigned_rate_limits": { 00:11:28.925 "rw_ios_per_sec": 0, 00:11:28.925 "rw_mbytes_per_sec": 0, 00:11:28.925 "r_mbytes_per_sec": 0, 00:11:28.925 "w_mbytes_per_sec": 0 00:11:28.925 }, 00:11:28.925 "claimed": true, 00:11:28.925 "claim_type": "exclusive_write", 00:11:28.925 "zoned": false, 00:11:28.925 "supported_io_types": { 00:11:28.925 "read": true, 00:11:28.925 "write": true, 00:11:28.925 "unmap": true, 00:11:28.925 "flush": true, 00:11:28.925 "reset": true, 00:11:28.925 "nvme_admin": false, 00:11:28.925 "nvme_io": false, 00:11:28.925 "nvme_io_md": false, 00:11:28.925 "write_zeroes": true, 00:11:28.925 "zcopy": true, 00:11:28.925 "get_zone_info": false, 00:11:28.925 "zone_management": false, 00:11:28.925 "zone_append": false, 00:11:28.925 "compare": false, 00:11:28.925 "compare_and_write": false, 00:11:28.925 "abort": true, 00:11:28.925 "seek_hole": false, 00:11:28.925 "seek_data": false, 00:11:28.925 "copy": true, 00:11:28.925 "nvme_iov_md": false 00:11:28.925 }, 00:11:28.925 "memory_domains": [ 00:11:28.925 { 00:11:28.925 "dma_device_id": "system", 00:11:28.925 "dma_device_type": 1 00:11:28.925 }, 00:11:28.925 { 00:11:28.925 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:28.925 "dma_device_type": 2 00:11:28.925 } 00:11:28.925 ], 00:11:28.925 "driver_specific": {} 00:11:28.925 } 00:11:28.925 ] 00:11:28.925 19:52:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:11:28.925 19:52:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:28.925 19:52:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:28.925 19:52:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:28.925 19:52:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:11:28.925 19:52:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:28.925 19:52:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:28.925 19:52:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:11:28.925 19:52:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:28.925 19:52:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:28.925 19:52:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:28.925 19:52:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:28.925 19:52:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:28.925 19:52:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:28.925 19:52:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:28.925 19:52:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.925 19:52:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.925 19:52:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:28.925 19:52:19 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.184 19:52:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:29.184 "name": "Existed_Raid", 00:11:29.184 "uuid": "d0c7639f-29e3-4e31-a13d-2f5789ace34d", 00:11:29.184 "strip_size_kb": 64, 00:11:29.184 "state": "online", 00:11:29.184 "raid_level": "raid5f", 00:11:29.184 "superblock": false, 00:11:29.184 "num_base_bdevs": 3, 00:11:29.184 "num_base_bdevs_discovered": 3, 00:11:29.184 "num_base_bdevs_operational": 3, 00:11:29.184 "base_bdevs_list": [ 00:11:29.184 { 00:11:29.184 "name": "BaseBdev1", 00:11:29.184 "uuid": "7acb3551-7326-4599-8e63-d7bbe3bbb3d2", 00:11:29.184 "is_configured": true, 00:11:29.184 "data_offset": 0, 00:11:29.184 "data_size": 65536 00:11:29.184 }, 00:11:29.184 { 00:11:29.184 "name": "BaseBdev2", 00:11:29.184 "uuid": "09046e46-1694-4cde-be83-7a48019dca50", 00:11:29.184 "is_configured": true, 00:11:29.184 "data_offset": 0, 00:11:29.184 "data_size": 65536 00:11:29.184 }, 00:11:29.184 { 00:11:29.184 "name": "BaseBdev3", 00:11:29.184 "uuid": "e211bbdd-e3aa-4a26-8d79-ed98e74e5be1", 00:11:29.184 "is_configured": true, 00:11:29.184 "data_offset": 0, 00:11:29.184 "data_size": 65536 00:11:29.184 } 00:11:29.184 ] 00:11:29.184 }' 00:11:29.184 19:52:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:29.184 19:52:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.442 19:52:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:29.442 19:52:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:29.442 19:52:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:29.442 19:52:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:29.442 19:52:20 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:29.442 19:52:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:29.442 19:52:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:29.442 19:52:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:29.442 19:52:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.442 19:52:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.442 [2024-11-26 19:52:20.130219] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:29.442 19:52:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.442 19:52:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:29.442 "name": "Existed_Raid", 00:11:29.442 "aliases": [ 00:11:29.442 "d0c7639f-29e3-4e31-a13d-2f5789ace34d" 00:11:29.442 ], 00:11:29.442 "product_name": "Raid Volume", 00:11:29.442 "block_size": 512, 00:11:29.442 "num_blocks": 131072, 00:11:29.442 "uuid": "d0c7639f-29e3-4e31-a13d-2f5789ace34d", 00:11:29.442 "assigned_rate_limits": { 00:11:29.442 "rw_ios_per_sec": 0, 00:11:29.442 "rw_mbytes_per_sec": 0, 00:11:29.442 "r_mbytes_per_sec": 0, 00:11:29.442 "w_mbytes_per_sec": 0 00:11:29.442 }, 00:11:29.442 "claimed": false, 00:11:29.442 "zoned": false, 00:11:29.442 "supported_io_types": { 00:11:29.442 "read": true, 00:11:29.442 "write": true, 00:11:29.442 "unmap": false, 00:11:29.442 "flush": false, 00:11:29.442 "reset": true, 00:11:29.442 "nvme_admin": false, 00:11:29.442 "nvme_io": false, 00:11:29.442 "nvme_io_md": false, 00:11:29.442 "write_zeroes": true, 00:11:29.442 "zcopy": false, 00:11:29.443 "get_zone_info": false, 00:11:29.443 "zone_management": false, 00:11:29.443 "zone_append": false, 
00:11:29.443 "compare": false, 00:11:29.443 "compare_and_write": false, 00:11:29.443 "abort": false, 00:11:29.443 "seek_hole": false, 00:11:29.443 "seek_data": false, 00:11:29.443 "copy": false, 00:11:29.443 "nvme_iov_md": false 00:11:29.443 }, 00:11:29.443 "driver_specific": { 00:11:29.443 "raid": { 00:11:29.443 "uuid": "d0c7639f-29e3-4e31-a13d-2f5789ace34d", 00:11:29.443 "strip_size_kb": 64, 00:11:29.443 "state": "online", 00:11:29.443 "raid_level": "raid5f", 00:11:29.443 "superblock": false, 00:11:29.443 "num_base_bdevs": 3, 00:11:29.443 "num_base_bdevs_discovered": 3, 00:11:29.443 "num_base_bdevs_operational": 3, 00:11:29.443 "base_bdevs_list": [ 00:11:29.443 { 00:11:29.443 "name": "BaseBdev1", 00:11:29.443 "uuid": "7acb3551-7326-4599-8e63-d7bbe3bbb3d2", 00:11:29.443 "is_configured": true, 00:11:29.443 "data_offset": 0, 00:11:29.443 "data_size": 65536 00:11:29.443 }, 00:11:29.443 { 00:11:29.443 "name": "BaseBdev2", 00:11:29.443 "uuid": "09046e46-1694-4cde-be83-7a48019dca50", 00:11:29.443 "is_configured": true, 00:11:29.443 "data_offset": 0, 00:11:29.443 "data_size": 65536 00:11:29.443 }, 00:11:29.443 { 00:11:29.443 "name": "BaseBdev3", 00:11:29.443 "uuid": "e211bbdd-e3aa-4a26-8d79-ed98e74e5be1", 00:11:29.443 "is_configured": true, 00:11:29.443 "data_offset": 0, 00:11:29.443 "data_size": 65536 00:11:29.443 } 00:11:29.443 ] 00:11:29.443 } 00:11:29.443 } 00:11:29.443 }' 00:11:29.443 19:52:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:29.443 19:52:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:29.443 BaseBdev2 00:11:29.443 BaseBdev3' 00:11:29.443 19:52:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:29.443 19:52:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 
' 00:11:29.443 19:52:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:29.443 19:52:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:29.443 19:52:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.443 19:52:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.443 19:52:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:29.443 19:52:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.443 19:52:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:29.443 19:52:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:29.443 19:52:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:29.443 19:52:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:29.443 19:52:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:29.443 19:52:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.443 19:52:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.443 19:52:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.443 19:52:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:29.443 19:52:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:29.443 19:52:20 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:29.443 19:52:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:29.443 19:52:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.443 19:52:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:29.443 19:52:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.443 19:52:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.443 19:52:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:29.443 19:52:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:29.443 19:52:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:29.443 19:52:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.443 19:52:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.443 [2024-11-26 19:52:20.326087] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:29.443 19:52:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.443 19:52:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:29.443 19:52:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:11:29.443 19:52:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:29.443 19:52:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:29.443 19:52:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:11:29.701 
19:52:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:11:29.701 19:52:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:29.701 19:52:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:29.701 19:52:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:11:29.701 19:52:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:29.701 19:52:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:29.701 19:52:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:29.701 19:52:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:29.701 19:52:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:29.701 19:52:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:29.701 19:52:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:29.701 19:52:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.701 19:52:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.701 19:52:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:29.701 19:52:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.701 19:52:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:29.701 "name": "Existed_Raid", 00:11:29.701 "uuid": "d0c7639f-29e3-4e31-a13d-2f5789ace34d", 00:11:29.701 "strip_size_kb": 64, 00:11:29.701 "state": 
"online", 00:11:29.701 "raid_level": "raid5f", 00:11:29.701 "superblock": false, 00:11:29.701 "num_base_bdevs": 3, 00:11:29.701 "num_base_bdevs_discovered": 2, 00:11:29.701 "num_base_bdevs_operational": 2, 00:11:29.701 "base_bdevs_list": [ 00:11:29.701 { 00:11:29.701 "name": null, 00:11:29.701 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:29.701 "is_configured": false, 00:11:29.701 "data_offset": 0, 00:11:29.701 "data_size": 65536 00:11:29.701 }, 00:11:29.701 { 00:11:29.701 "name": "BaseBdev2", 00:11:29.701 "uuid": "09046e46-1694-4cde-be83-7a48019dca50", 00:11:29.701 "is_configured": true, 00:11:29.701 "data_offset": 0, 00:11:29.701 "data_size": 65536 00:11:29.701 }, 00:11:29.701 { 00:11:29.701 "name": "BaseBdev3", 00:11:29.701 "uuid": "e211bbdd-e3aa-4a26-8d79-ed98e74e5be1", 00:11:29.701 "is_configured": true, 00:11:29.701 "data_offset": 0, 00:11:29.701 "data_size": 65536 00:11:29.701 } 00:11:29.701 ] 00:11:29.701 }' 00:11:29.701 19:52:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:29.701 19:52:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.960 19:52:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:29.960 19:52:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:29.960 19:52:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:29.960 19:52:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:29.960 19:52:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.960 19:52:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.960 19:52:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.960 19:52:20 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:29.960 19:52:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:29.960 19:52:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:29.960 19:52:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.960 19:52:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.960 [2024-11-26 19:52:20.735215] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:29.960 [2024-11-26 19:52:20.735316] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:29.960 [2024-11-26 19:52:20.785151] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:29.960 19:52:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.960 19:52:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:29.960 19:52:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:29.960 19:52:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:29.960 19:52:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:29.960 19:52:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.960 19:52:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.960 19:52:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.960 19:52:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:29.960 19:52:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 
00:11:29.960 19:52:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:29.960 19:52:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.960 19:52:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.960 [2024-11-26 19:52:20.829198] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:29.960 [2024-11-26 19:52:20.829250] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:29.960 19:52:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.960 19:52:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:29.960 19:52:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:29.960 19:52:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:29.960 19:52:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:29.960 19:52:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.960 19:52:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.219 19:52:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.219 19:52:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:30.219 19:52:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:30.219 19:52:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:11:30.219 19:52:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:30.219 19:52:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < 
num_base_bdevs )) 00:11:30.219 19:52:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:30.219 19:52:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.219 19:52:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.219 BaseBdev2 00:11:30.219 19:52:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.219 19:52:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:30.219 19:52:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:30.219 19:52:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:30.219 19:52:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:30.219 19:52:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:30.219 19:52:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:30.219 19:52:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:30.219 19:52:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.219 19:52:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.219 19:52:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.219 19:52:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:30.219 19:52:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.219 19:52:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:11:30.219 [ 00:11:30.219 { 00:11:30.219 "name": "BaseBdev2", 00:11:30.219 "aliases": [ 00:11:30.219 "af16a657-910e-4348-b995-3c60d2b7acc4" 00:11:30.219 ], 00:11:30.219 "product_name": "Malloc disk", 00:11:30.219 "block_size": 512, 00:11:30.219 "num_blocks": 65536, 00:11:30.219 "uuid": "af16a657-910e-4348-b995-3c60d2b7acc4", 00:11:30.219 "assigned_rate_limits": { 00:11:30.219 "rw_ios_per_sec": 0, 00:11:30.219 "rw_mbytes_per_sec": 0, 00:11:30.219 "r_mbytes_per_sec": 0, 00:11:30.219 "w_mbytes_per_sec": 0 00:11:30.219 }, 00:11:30.219 "claimed": false, 00:11:30.219 "zoned": false, 00:11:30.219 "supported_io_types": { 00:11:30.219 "read": true, 00:11:30.219 "write": true, 00:11:30.219 "unmap": true, 00:11:30.219 "flush": true, 00:11:30.219 "reset": true, 00:11:30.219 "nvme_admin": false, 00:11:30.219 "nvme_io": false, 00:11:30.219 "nvme_io_md": false, 00:11:30.219 "write_zeroes": true, 00:11:30.219 "zcopy": true, 00:11:30.219 "get_zone_info": false, 00:11:30.219 "zone_management": false, 00:11:30.219 "zone_append": false, 00:11:30.219 "compare": false, 00:11:30.219 "compare_and_write": false, 00:11:30.219 "abort": true, 00:11:30.219 "seek_hole": false, 00:11:30.219 "seek_data": false, 00:11:30.219 "copy": true, 00:11:30.219 "nvme_iov_md": false 00:11:30.219 }, 00:11:30.219 "memory_domains": [ 00:11:30.219 { 00:11:30.219 "dma_device_id": "system", 00:11:30.219 "dma_device_type": 1 00:11:30.219 }, 00:11:30.219 { 00:11:30.219 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:30.219 "dma_device_type": 2 00:11:30.219 } 00:11:30.219 ], 00:11:30.219 "driver_specific": {} 00:11:30.219 } 00:11:30.219 ] 00:11:30.219 19:52:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.219 19:52:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:30.219 19:52:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:30.219 19:52:20 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:30.219 19:52:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:30.219 19:52:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.219 19:52:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.219 BaseBdev3 00:11:30.219 19:52:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.219 19:52:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:30.219 19:52:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:30.219 19:52:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:30.219 19:52:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:30.219 19:52:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:30.219 19:52:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:30.219 19:52:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:30.219 19:52:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.219 19:52:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.219 19:52:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.219 19:52:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:30.219 19:52:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.219 19:52:20 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:30.219 [ 00:11:30.219 { 00:11:30.219 "name": "BaseBdev3", 00:11:30.219 "aliases": [ 00:11:30.219 "e15f2fa9-365c-4e68-ab41-a764f6339b88" 00:11:30.219 ], 00:11:30.219 "product_name": "Malloc disk", 00:11:30.219 "block_size": 512, 00:11:30.220 "num_blocks": 65536, 00:11:30.220 "uuid": "e15f2fa9-365c-4e68-ab41-a764f6339b88", 00:11:30.220 "assigned_rate_limits": { 00:11:30.220 "rw_ios_per_sec": 0, 00:11:30.220 "rw_mbytes_per_sec": 0, 00:11:30.220 "r_mbytes_per_sec": 0, 00:11:30.220 "w_mbytes_per_sec": 0 00:11:30.220 }, 00:11:30.220 "claimed": false, 00:11:30.220 "zoned": false, 00:11:30.220 "supported_io_types": { 00:11:30.220 "read": true, 00:11:30.220 "write": true, 00:11:30.220 "unmap": true, 00:11:30.220 "flush": true, 00:11:30.220 "reset": true, 00:11:30.220 "nvme_admin": false, 00:11:30.220 "nvme_io": false, 00:11:30.220 "nvme_io_md": false, 00:11:30.220 "write_zeroes": true, 00:11:30.220 "zcopy": true, 00:11:30.220 "get_zone_info": false, 00:11:30.220 "zone_management": false, 00:11:30.220 "zone_append": false, 00:11:30.220 "compare": false, 00:11:30.220 "compare_and_write": false, 00:11:30.220 "abort": true, 00:11:30.220 "seek_hole": false, 00:11:30.220 "seek_data": false, 00:11:30.220 "copy": true, 00:11:30.220 "nvme_iov_md": false 00:11:30.220 }, 00:11:30.220 "memory_domains": [ 00:11:30.220 { 00:11:30.220 "dma_device_id": "system", 00:11:30.220 "dma_device_type": 1 00:11:30.220 }, 00:11:30.220 { 00:11:30.220 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:30.220 "dma_device_type": 2 00:11:30.220 } 00:11:30.220 ], 00:11:30.220 "driver_specific": {} 00:11:30.220 } 00:11:30.220 ] 00:11:30.220 19:52:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.220 19:52:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:30.220 19:52:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:30.220 19:52:21 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:30.220 19:52:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:30.220 19:52:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.220 19:52:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.220 [2024-11-26 19:52:21.010025] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:30.220 [2024-11-26 19:52:21.010070] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:30.220 [2024-11-26 19:52:21.010089] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:30.220 [2024-11-26 19:52:21.011710] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:30.220 19:52:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.220 19:52:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:11:30.220 19:52:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:30.220 19:52:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:30.220 19:52:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:11:30.220 19:52:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:30.220 19:52:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:30.220 19:52:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:30.220 19:52:21 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:30.220 19:52:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:30.220 19:52:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:30.220 19:52:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:30.220 19:52:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.220 19:52:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.220 19:52:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:30.220 19:52:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.220 19:52:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:30.220 "name": "Existed_Raid", 00:11:30.220 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:30.220 "strip_size_kb": 64, 00:11:30.220 "state": "configuring", 00:11:30.220 "raid_level": "raid5f", 00:11:30.220 "superblock": false, 00:11:30.220 "num_base_bdevs": 3, 00:11:30.220 "num_base_bdevs_discovered": 2, 00:11:30.220 "num_base_bdevs_operational": 3, 00:11:30.220 "base_bdevs_list": [ 00:11:30.220 { 00:11:30.220 "name": "BaseBdev1", 00:11:30.220 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:30.220 "is_configured": false, 00:11:30.220 "data_offset": 0, 00:11:30.220 "data_size": 0 00:11:30.220 }, 00:11:30.220 { 00:11:30.220 "name": "BaseBdev2", 00:11:30.220 "uuid": "af16a657-910e-4348-b995-3c60d2b7acc4", 00:11:30.220 "is_configured": true, 00:11:30.220 "data_offset": 0, 00:11:30.220 "data_size": 65536 00:11:30.220 }, 00:11:30.220 { 00:11:30.220 "name": "BaseBdev3", 00:11:30.220 "uuid": "e15f2fa9-365c-4e68-ab41-a764f6339b88", 00:11:30.220 "is_configured": true, 
00:11:30.220 "data_offset": 0, 00:11:30.220 "data_size": 65536 00:11:30.220 } 00:11:30.220 ] 00:11:30.220 }' 00:11:30.220 19:52:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:30.220 19:52:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.479 19:52:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:30.479 19:52:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.479 19:52:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.479 [2024-11-26 19:52:21.318100] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:30.479 19:52:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.479 19:52:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:11:30.479 19:52:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:30.479 19:52:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:30.479 19:52:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:11:30.479 19:52:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:30.479 19:52:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:30.479 19:52:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:30.479 19:52:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:30.479 19:52:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:30.479 19:52:21 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:30.479 19:52:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:30.479 19:52:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:30.479 19:52:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.479 19:52:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.479 19:52:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.479 19:52:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:30.479 "name": "Existed_Raid", 00:11:30.479 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:30.479 "strip_size_kb": 64, 00:11:30.479 "state": "configuring", 00:11:30.479 "raid_level": "raid5f", 00:11:30.479 "superblock": false, 00:11:30.479 "num_base_bdevs": 3, 00:11:30.479 "num_base_bdevs_discovered": 1, 00:11:30.479 "num_base_bdevs_operational": 3, 00:11:30.479 "base_bdevs_list": [ 00:11:30.479 { 00:11:30.479 "name": "BaseBdev1", 00:11:30.479 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:30.479 "is_configured": false, 00:11:30.479 "data_offset": 0, 00:11:30.479 "data_size": 0 00:11:30.479 }, 00:11:30.479 { 00:11:30.479 "name": null, 00:11:30.479 "uuid": "af16a657-910e-4348-b995-3c60d2b7acc4", 00:11:30.479 "is_configured": false, 00:11:30.479 "data_offset": 0, 00:11:30.480 "data_size": 65536 00:11:30.480 }, 00:11:30.480 { 00:11:30.480 "name": "BaseBdev3", 00:11:30.480 "uuid": "e15f2fa9-365c-4e68-ab41-a764f6339b88", 00:11:30.480 "is_configured": true, 00:11:30.480 "data_offset": 0, 00:11:30.480 "data_size": 65536 00:11:30.480 } 00:11:30.480 ] 00:11:30.480 }' 00:11:30.480 19:52:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:30.480 19:52:21 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.739 19:52:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:30.739 19:52:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.739 19:52:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.739 19:52:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:30.739 19:52:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.739 19:52:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:30.739 19:52:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:30.739 19:52:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.739 19:52:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.997 [2024-11-26 19:52:21.678700] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:30.997 BaseBdev1 00:11:30.997 19:52:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.997 19:52:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:30.997 19:52:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:30.997 19:52:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:30.997 19:52:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:30.997 19:52:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:30.997 19:52:21 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:30.997 19:52:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:30.997 19:52:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.997 19:52:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.997 19:52:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.997 19:52:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:30.997 19:52:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.997 19:52:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.997 [ 00:11:30.997 { 00:11:30.997 "name": "BaseBdev1", 00:11:30.997 "aliases": [ 00:11:30.997 "c1334ac0-199d-47ba-9eb0-6da71acf969d" 00:11:30.997 ], 00:11:30.997 "product_name": "Malloc disk", 00:11:30.997 "block_size": 512, 00:11:30.997 "num_blocks": 65536, 00:11:30.997 "uuid": "c1334ac0-199d-47ba-9eb0-6da71acf969d", 00:11:30.997 "assigned_rate_limits": { 00:11:30.997 "rw_ios_per_sec": 0, 00:11:30.997 "rw_mbytes_per_sec": 0, 00:11:30.997 "r_mbytes_per_sec": 0, 00:11:30.997 "w_mbytes_per_sec": 0 00:11:30.997 }, 00:11:30.997 "claimed": true, 00:11:30.997 "claim_type": "exclusive_write", 00:11:30.997 "zoned": false, 00:11:30.997 "supported_io_types": { 00:11:30.997 "read": true, 00:11:30.997 "write": true, 00:11:30.997 "unmap": true, 00:11:30.997 "flush": true, 00:11:30.997 "reset": true, 00:11:30.997 "nvme_admin": false, 00:11:30.997 "nvme_io": false, 00:11:30.997 "nvme_io_md": false, 00:11:30.997 "write_zeroes": true, 00:11:30.997 "zcopy": true, 00:11:30.997 "get_zone_info": false, 00:11:30.997 "zone_management": false, 00:11:30.997 "zone_append": false, 00:11:30.997 
"compare": false, 00:11:30.997 "compare_and_write": false, 00:11:30.997 "abort": true, 00:11:30.997 "seek_hole": false, 00:11:30.998 "seek_data": false, 00:11:30.998 "copy": true, 00:11:30.998 "nvme_iov_md": false 00:11:30.998 }, 00:11:30.998 "memory_domains": [ 00:11:30.998 { 00:11:30.998 "dma_device_id": "system", 00:11:30.998 "dma_device_type": 1 00:11:30.998 }, 00:11:30.998 { 00:11:30.998 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:30.998 "dma_device_type": 2 00:11:30.998 } 00:11:30.998 ], 00:11:30.998 "driver_specific": {} 00:11:30.998 } 00:11:30.998 ] 00:11:30.998 19:52:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.998 19:52:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:30.998 19:52:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:11:30.998 19:52:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:30.998 19:52:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:30.998 19:52:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:11:30.998 19:52:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:30.998 19:52:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:30.998 19:52:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:30.998 19:52:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:30.998 19:52:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:30.998 19:52:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:30.998 19:52:21 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:30.998 19:52:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.998 19:52:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.998 19:52:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:30.998 19:52:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.998 19:52:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:30.998 "name": "Existed_Raid", 00:11:30.998 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:30.998 "strip_size_kb": 64, 00:11:30.998 "state": "configuring", 00:11:30.998 "raid_level": "raid5f", 00:11:30.998 "superblock": false, 00:11:30.998 "num_base_bdevs": 3, 00:11:30.998 "num_base_bdevs_discovered": 2, 00:11:30.998 "num_base_bdevs_operational": 3, 00:11:30.998 "base_bdevs_list": [ 00:11:30.998 { 00:11:30.998 "name": "BaseBdev1", 00:11:30.998 "uuid": "c1334ac0-199d-47ba-9eb0-6da71acf969d", 00:11:30.998 "is_configured": true, 00:11:30.998 "data_offset": 0, 00:11:30.998 "data_size": 65536 00:11:30.998 }, 00:11:30.998 { 00:11:30.998 "name": null, 00:11:30.998 "uuid": "af16a657-910e-4348-b995-3c60d2b7acc4", 00:11:30.998 "is_configured": false, 00:11:30.998 "data_offset": 0, 00:11:30.998 "data_size": 65536 00:11:30.998 }, 00:11:30.998 { 00:11:30.998 "name": "BaseBdev3", 00:11:30.998 "uuid": "e15f2fa9-365c-4e68-ab41-a764f6339b88", 00:11:30.998 "is_configured": true, 00:11:30.998 "data_offset": 0, 00:11:30.998 "data_size": 65536 00:11:30.998 } 00:11:30.998 ] 00:11:30.998 }' 00:11:30.998 19:52:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:30.998 19:52:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.256 19:52:22 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:31.256 19:52:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:31.256 19:52:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.256 19:52:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.256 19:52:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.256 19:52:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:31.256 19:52:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:31.256 19:52:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.256 19:52:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.256 [2024-11-26 19:52:22.050803] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:31.256 19:52:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.256 19:52:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:11:31.256 19:52:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:31.256 19:52:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:31.256 19:52:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:11:31.256 19:52:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:31.256 19:52:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:31.256 19:52:22 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:31.256 19:52:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:31.256 19:52:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:31.256 19:52:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:31.256 19:52:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:31.256 19:52:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.256 19:52:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.256 19:52:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:31.256 19:52:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.256 19:52:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:31.256 "name": "Existed_Raid", 00:11:31.256 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:31.256 "strip_size_kb": 64, 00:11:31.256 "state": "configuring", 00:11:31.256 "raid_level": "raid5f", 00:11:31.256 "superblock": false, 00:11:31.256 "num_base_bdevs": 3, 00:11:31.256 "num_base_bdevs_discovered": 1, 00:11:31.256 "num_base_bdevs_operational": 3, 00:11:31.256 "base_bdevs_list": [ 00:11:31.256 { 00:11:31.256 "name": "BaseBdev1", 00:11:31.256 "uuid": "c1334ac0-199d-47ba-9eb0-6da71acf969d", 00:11:31.256 "is_configured": true, 00:11:31.256 "data_offset": 0, 00:11:31.256 "data_size": 65536 00:11:31.256 }, 00:11:31.256 { 00:11:31.256 "name": null, 00:11:31.256 "uuid": "af16a657-910e-4348-b995-3c60d2b7acc4", 00:11:31.256 "is_configured": false, 00:11:31.256 "data_offset": 0, 00:11:31.256 "data_size": 65536 00:11:31.256 }, 00:11:31.256 { 00:11:31.256 "name": null, 
00:11:31.256 "uuid": "e15f2fa9-365c-4e68-ab41-a764f6339b88", 00:11:31.256 "is_configured": false, 00:11:31.256 "data_offset": 0, 00:11:31.256 "data_size": 65536 00:11:31.256 } 00:11:31.256 ] 00:11:31.256 }' 00:11:31.256 19:52:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:31.256 19:52:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.514 19:52:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:31.514 19:52:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.514 19:52:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.514 19:52:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:31.514 19:52:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.514 19:52:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:31.514 19:52:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:31.514 19:52:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.514 19:52:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.514 [2024-11-26 19:52:22.426903] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:31.514 19:52:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.514 19:52:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:11:31.514 19:52:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:31.514 19:52:22 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:31.514 19:52:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:11:31.514 19:52:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:31.514 19:52:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:31.514 19:52:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:31.514 19:52:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:31.514 19:52:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:31.515 19:52:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:31.515 19:52:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:31.515 19:52:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:31.515 19:52:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.515 19:52:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.772 19:52:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.772 19:52:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:31.772 "name": "Existed_Raid", 00:11:31.772 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:31.772 "strip_size_kb": 64, 00:11:31.772 "state": "configuring", 00:11:31.772 "raid_level": "raid5f", 00:11:31.772 "superblock": false, 00:11:31.772 "num_base_bdevs": 3, 00:11:31.772 "num_base_bdevs_discovered": 2, 00:11:31.772 "num_base_bdevs_operational": 3, 00:11:31.772 "base_bdevs_list": [ 00:11:31.772 { 
00:11:31.772 "name": "BaseBdev1", 00:11:31.772 "uuid": "c1334ac0-199d-47ba-9eb0-6da71acf969d", 00:11:31.772 "is_configured": true, 00:11:31.772 "data_offset": 0, 00:11:31.772 "data_size": 65536 00:11:31.772 }, 00:11:31.772 { 00:11:31.772 "name": null, 00:11:31.772 "uuid": "af16a657-910e-4348-b995-3c60d2b7acc4", 00:11:31.772 "is_configured": false, 00:11:31.772 "data_offset": 0, 00:11:31.772 "data_size": 65536 00:11:31.772 }, 00:11:31.772 { 00:11:31.772 "name": "BaseBdev3", 00:11:31.772 "uuid": "e15f2fa9-365c-4e68-ab41-a764f6339b88", 00:11:31.772 "is_configured": true, 00:11:31.772 "data_offset": 0, 00:11:31.772 "data_size": 65536 00:11:31.772 } 00:11:31.772 ] 00:11:31.772 }' 00:11:31.772 19:52:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:31.772 19:52:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.030 19:52:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:32.030 19:52:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.030 19:52:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:32.030 19:52:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.030 19:52:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.030 19:52:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:32.030 19:52:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:32.030 19:52:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.030 19:52:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.030 [2024-11-26 19:52:22.779215] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:32.030 19:52:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.030 19:52:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:11:32.030 19:52:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:32.030 19:52:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:32.030 19:52:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:11:32.030 19:52:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:32.030 19:52:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:32.030 19:52:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:32.030 19:52:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:32.030 19:52:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:32.030 19:52:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:32.030 19:52:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:32.030 19:52:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:32.030 19:52:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.030 19:52:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.030 19:52:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.030 19:52:22 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:32.030 "name": "Existed_Raid", 00:11:32.030 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:32.030 "strip_size_kb": 64, 00:11:32.030 "state": "configuring", 00:11:32.030 "raid_level": "raid5f", 00:11:32.030 "superblock": false, 00:11:32.030 "num_base_bdevs": 3, 00:11:32.030 "num_base_bdevs_discovered": 1, 00:11:32.030 "num_base_bdevs_operational": 3, 00:11:32.030 "base_bdevs_list": [ 00:11:32.030 { 00:11:32.030 "name": null, 00:11:32.030 "uuid": "c1334ac0-199d-47ba-9eb0-6da71acf969d", 00:11:32.030 "is_configured": false, 00:11:32.030 "data_offset": 0, 00:11:32.030 "data_size": 65536 00:11:32.030 }, 00:11:32.030 { 00:11:32.030 "name": null, 00:11:32.030 "uuid": "af16a657-910e-4348-b995-3c60d2b7acc4", 00:11:32.030 "is_configured": false, 00:11:32.030 "data_offset": 0, 00:11:32.030 "data_size": 65536 00:11:32.030 }, 00:11:32.030 { 00:11:32.030 "name": "BaseBdev3", 00:11:32.030 "uuid": "e15f2fa9-365c-4e68-ab41-a764f6339b88", 00:11:32.030 "is_configured": true, 00:11:32.030 "data_offset": 0, 00:11:32.030 "data_size": 65536 00:11:32.030 } 00:11:32.030 ] 00:11:32.030 }' 00:11:32.030 19:52:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:32.030 19:52:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.289 19:52:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:32.289 19:52:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:32.289 19:52:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.289 19:52:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.289 19:52:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.289 19:52:23 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:32.289 19:52:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:32.289 19:52:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.289 19:52:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.289 [2024-11-26 19:52:23.184458] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:32.289 19:52:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.289 19:52:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:11:32.289 19:52:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:32.289 19:52:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:32.289 19:52:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:11:32.289 19:52:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:32.289 19:52:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:32.289 19:52:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:32.289 19:52:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:32.289 19:52:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:32.289 19:52:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:32.289 19:52:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:32.289 19:52:23 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:32.289 19:52:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.289 19:52:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.289 19:52:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.289 19:52:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:32.289 "name": "Existed_Raid", 00:11:32.289 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:32.289 "strip_size_kb": 64, 00:11:32.289 "state": "configuring", 00:11:32.289 "raid_level": "raid5f", 00:11:32.289 "superblock": false, 00:11:32.289 "num_base_bdevs": 3, 00:11:32.289 "num_base_bdevs_discovered": 2, 00:11:32.289 "num_base_bdevs_operational": 3, 00:11:32.289 "base_bdevs_list": [ 00:11:32.289 { 00:11:32.289 "name": null, 00:11:32.289 "uuid": "c1334ac0-199d-47ba-9eb0-6da71acf969d", 00:11:32.289 "is_configured": false, 00:11:32.289 "data_offset": 0, 00:11:32.289 "data_size": 65536 00:11:32.289 }, 00:11:32.289 { 00:11:32.289 "name": "BaseBdev2", 00:11:32.289 "uuid": "af16a657-910e-4348-b995-3c60d2b7acc4", 00:11:32.289 "is_configured": true, 00:11:32.289 "data_offset": 0, 00:11:32.289 "data_size": 65536 00:11:32.289 }, 00:11:32.289 { 00:11:32.289 "name": "BaseBdev3", 00:11:32.289 "uuid": "e15f2fa9-365c-4e68-ab41-a764f6339b88", 00:11:32.289 "is_configured": true, 00:11:32.289 "data_offset": 0, 00:11:32.289 "data_size": 65536 00:11:32.289 } 00:11:32.289 ] 00:11:32.289 }' 00:11:32.289 19:52:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:32.289 19:52:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.854 19:52:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:32.854 
19:52:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:32.855 19:52:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.855 19:52:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.855 19:52:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.855 19:52:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:32.855 19:52:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:32.855 19:52:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:32.855 19:52:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.855 19:52:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.855 19:52:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.855 19:52:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u c1334ac0-199d-47ba-9eb0-6da71acf969d 00:11:32.855 19:52:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.855 19:52:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.855 [2024-11-26 19:52:23.584669] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:32.855 [2024-11-26 19:52:23.584710] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:32.855 [2024-11-26 19:52:23.584718] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:11:32.855 [2024-11-26 19:52:23.584927] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000006220 00:11:32.855 [2024-11-26 19:52:23.587946] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:32.855 [2024-11-26 19:52:23.587967] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:32.855 [2024-11-26 19:52:23.588165] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:32.855 NewBaseBdev 00:11:32.855 19:52:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.855 19:52:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:32.855 19:52:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:11:32.855 19:52:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:32.855 19:52:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:32.855 19:52:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:32.855 19:52:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:32.855 19:52:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:32.855 19:52:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.855 19:52:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.855 19:52:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.855 19:52:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:32.855 19:52:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.855 19:52:23 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.855 [ 00:11:32.855 { 00:11:32.855 "name": "NewBaseBdev", 00:11:32.855 "aliases": [ 00:11:32.855 "c1334ac0-199d-47ba-9eb0-6da71acf969d" 00:11:32.855 ], 00:11:32.855 "product_name": "Malloc disk", 00:11:32.855 "block_size": 512, 00:11:32.855 "num_blocks": 65536, 00:11:32.855 "uuid": "c1334ac0-199d-47ba-9eb0-6da71acf969d", 00:11:32.855 "assigned_rate_limits": { 00:11:32.855 "rw_ios_per_sec": 0, 00:11:32.855 "rw_mbytes_per_sec": 0, 00:11:32.855 "r_mbytes_per_sec": 0, 00:11:32.855 "w_mbytes_per_sec": 0 00:11:32.855 }, 00:11:32.855 "claimed": true, 00:11:32.855 "claim_type": "exclusive_write", 00:11:32.855 "zoned": false, 00:11:32.855 "supported_io_types": { 00:11:32.855 "read": true, 00:11:32.855 "write": true, 00:11:32.855 "unmap": true, 00:11:32.855 "flush": true, 00:11:32.855 "reset": true, 00:11:32.855 "nvme_admin": false, 00:11:32.855 "nvme_io": false, 00:11:32.855 "nvme_io_md": false, 00:11:32.855 "write_zeroes": true, 00:11:32.855 "zcopy": true, 00:11:32.855 "get_zone_info": false, 00:11:32.855 "zone_management": false, 00:11:32.855 "zone_append": false, 00:11:32.855 "compare": false, 00:11:32.855 "compare_and_write": false, 00:11:32.855 "abort": true, 00:11:32.855 "seek_hole": false, 00:11:32.855 "seek_data": false, 00:11:32.855 "copy": true, 00:11:32.855 "nvme_iov_md": false 00:11:32.855 }, 00:11:32.855 "memory_domains": [ 00:11:32.855 { 00:11:32.855 "dma_device_id": "system", 00:11:32.855 "dma_device_type": 1 00:11:32.855 }, 00:11:32.855 { 00:11:32.855 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:32.855 "dma_device_type": 2 00:11:32.855 } 00:11:32.855 ], 00:11:32.855 "driver_specific": {} 00:11:32.855 } 00:11:32.855 ] 00:11:32.855 19:52:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.855 19:52:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:32.855 19:52:23 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:11:32.855 19:52:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:32.855 19:52:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:32.855 19:52:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:11:32.855 19:52:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:32.855 19:52:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:32.855 19:52:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:32.855 19:52:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:32.855 19:52:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:32.855 19:52:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:32.855 19:52:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:32.855 19:52:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.855 19:52:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.855 19:52:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:32.855 19:52:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.855 19:52:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:32.855 "name": "Existed_Raid", 00:11:32.855 "uuid": "2bdf9413-6fbc-4764-8809-e3068a24a2fa", 00:11:32.855 "strip_size_kb": 64, 00:11:32.855 "state": "online", 
00:11:32.855 "raid_level": "raid5f", 00:11:32.855 "superblock": false, 00:11:32.855 "num_base_bdevs": 3, 00:11:32.855 "num_base_bdevs_discovered": 3, 00:11:32.855 "num_base_bdevs_operational": 3, 00:11:32.855 "base_bdevs_list": [ 00:11:32.855 { 00:11:32.855 "name": "NewBaseBdev", 00:11:32.855 "uuid": "c1334ac0-199d-47ba-9eb0-6da71acf969d", 00:11:32.855 "is_configured": true, 00:11:32.855 "data_offset": 0, 00:11:32.855 "data_size": 65536 00:11:32.855 }, 00:11:32.855 { 00:11:32.855 "name": "BaseBdev2", 00:11:32.855 "uuid": "af16a657-910e-4348-b995-3c60d2b7acc4", 00:11:32.855 "is_configured": true, 00:11:32.855 "data_offset": 0, 00:11:32.855 "data_size": 65536 00:11:32.855 }, 00:11:32.855 { 00:11:32.855 "name": "BaseBdev3", 00:11:32.855 "uuid": "e15f2fa9-365c-4e68-ab41-a764f6339b88", 00:11:32.855 "is_configured": true, 00:11:32.855 "data_offset": 0, 00:11:32.855 "data_size": 65536 00:11:32.855 } 00:11:32.855 ] 00:11:32.855 }' 00:11:32.855 19:52:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:32.855 19:52:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.114 19:52:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:33.114 19:52:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:33.114 19:52:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:33.114 19:52:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:33.114 19:52:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:33.114 19:52:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:33.114 19:52:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:33.114 19:52:23 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.114 19:52:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:33.114 19:52:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.114 [2024-11-26 19:52:23.939837] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:33.114 19:52:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.114 19:52:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:33.114 "name": "Existed_Raid", 00:11:33.114 "aliases": [ 00:11:33.114 "2bdf9413-6fbc-4764-8809-e3068a24a2fa" 00:11:33.114 ], 00:11:33.114 "product_name": "Raid Volume", 00:11:33.114 "block_size": 512, 00:11:33.114 "num_blocks": 131072, 00:11:33.114 "uuid": "2bdf9413-6fbc-4764-8809-e3068a24a2fa", 00:11:33.114 "assigned_rate_limits": { 00:11:33.114 "rw_ios_per_sec": 0, 00:11:33.114 "rw_mbytes_per_sec": 0, 00:11:33.114 "r_mbytes_per_sec": 0, 00:11:33.114 "w_mbytes_per_sec": 0 00:11:33.114 }, 00:11:33.114 "claimed": false, 00:11:33.114 "zoned": false, 00:11:33.114 "supported_io_types": { 00:11:33.114 "read": true, 00:11:33.114 "write": true, 00:11:33.114 "unmap": false, 00:11:33.114 "flush": false, 00:11:33.114 "reset": true, 00:11:33.114 "nvme_admin": false, 00:11:33.114 "nvme_io": false, 00:11:33.114 "nvme_io_md": false, 00:11:33.114 "write_zeroes": true, 00:11:33.114 "zcopy": false, 00:11:33.114 "get_zone_info": false, 00:11:33.114 "zone_management": false, 00:11:33.114 "zone_append": false, 00:11:33.114 "compare": false, 00:11:33.114 "compare_and_write": false, 00:11:33.114 "abort": false, 00:11:33.114 "seek_hole": false, 00:11:33.114 "seek_data": false, 00:11:33.114 "copy": false, 00:11:33.114 "nvme_iov_md": false 00:11:33.114 }, 00:11:33.114 "driver_specific": { 00:11:33.114 "raid": { 00:11:33.114 "uuid": 
"2bdf9413-6fbc-4764-8809-e3068a24a2fa", 00:11:33.114 "strip_size_kb": 64, 00:11:33.114 "state": "online", 00:11:33.114 "raid_level": "raid5f", 00:11:33.114 "superblock": false, 00:11:33.114 "num_base_bdevs": 3, 00:11:33.114 "num_base_bdevs_discovered": 3, 00:11:33.114 "num_base_bdevs_operational": 3, 00:11:33.114 "base_bdevs_list": [ 00:11:33.114 { 00:11:33.114 "name": "NewBaseBdev", 00:11:33.114 "uuid": "c1334ac0-199d-47ba-9eb0-6da71acf969d", 00:11:33.114 "is_configured": true, 00:11:33.114 "data_offset": 0, 00:11:33.114 "data_size": 65536 00:11:33.114 }, 00:11:33.114 { 00:11:33.114 "name": "BaseBdev2", 00:11:33.114 "uuid": "af16a657-910e-4348-b995-3c60d2b7acc4", 00:11:33.114 "is_configured": true, 00:11:33.114 "data_offset": 0, 00:11:33.114 "data_size": 65536 00:11:33.114 }, 00:11:33.114 { 00:11:33.114 "name": "BaseBdev3", 00:11:33.114 "uuid": "e15f2fa9-365c-4e68-ab41-a764f6339b88", 00:11:33.114 "is_configured": true, 00:11:33.114 "data_offset": 0, 00:11:33.114 "data_size": 65536 00:11:33.114 } 00:11:33.114 ] 00:11:33.114 } 00:11:33.114 } 00:11:33.114 }' 00:11:33.114 19:52:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:33.114 19:52:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:33.114 BaseBdev2 00:11:33.114 BaseBdev3' 00:11:33.114 19:52:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:33.114 19:52:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:33.114 19:52:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:33.114 19:52:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:33.114 19:52:24 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:33.114 19:52:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.114 19:52:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.114 19:52:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.373 19:52:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:33.373 19:52:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:33.373 19:52:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:33.373 19:52:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:33.373 19:52:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:33.373 19:52:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.373 19:52:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.373 19:52:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.373 19:52:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:33.373 19:52:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:33.373 19:52:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:33.373 19:52:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:33.373 19:52:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, 
.dif_type] | join(" ")' 00:11:33.373 19:52:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.373 19:52:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.373 19:52:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.373 19:52:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:33.373 19:52:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:33.373 19:52:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:33.373 19:52:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.373 19:52:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.373 [2024-11-26 19:52:24.127707] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:33.373 [2024-11-26 19:52:24.127742] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:33.373 [2024-11-26 19:52:24.127820] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:33.373 [2024-11-26 19:52:24.128071] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:33.373 [2024-11-26 19:52:24.128089] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:11:33.373 19:52:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.373 19:52:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 77660 00:11:33.373 19:52:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 77660 ']' 00:11:33.373 19:52:24 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@958 -- # kill -0 77660 00:11:33.373 19:52:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # uname 00:11:33.373 19:52:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:33.373 19:52:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77660 00:11:33.373 19:52:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:33.373 19:52:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:33.373 killing process with pid 77660 00:11:33.373 19:52:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77660' 00:11:33.373 19:52:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@973 -- # kill 77660 00:11:33.373 [2024-11-26 19:52:24.159367] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:33.373 19:52:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@978 -- # wait 77660 00:11:33.630 [2024-11-26 19:52:24.314782] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:34.197 19:52:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:11:34.197 00:11:34.197 real 0m7.464s 00:11:34.197 user 0m11.952s 00:11:34.197 sys 0m1.344s 00:11:34.197 19:52:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:34.197 ************************************ 00:11:34.197 END TEST raid5f_state_function_test 00:11:34.197 ************************************ 00:11:34.197 19:52:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.197 19:52:24 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 3 true 00:11:34.197 19:52:24 bdev_raid -- 
common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:34.197 19:52:24 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:34.197 19:52:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:34.197 ************************************ 00:11:34.197 START TEST raid5f_state_function_test_sb 00:11:34.197 ************************************ 00:11:34.197 19:52:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 3 true 00:11:34.197 19:52:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:11:34.197 19:52:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:11:34.197 19:52:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:11:34.197 19:52:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:34.197 19:52:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:34.197 19:52:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:34.197 19:52:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:34.197 19:52:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:34.197 19:52:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:34.197 19:52:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:34.197 19:52:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:34.197 19:52:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:34.197 19:52:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:34.197 19:52:24 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:34.197 19:52:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:34.197 19:52:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:11:34.197 19:52:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:34.197 19:52:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:34.197 19:52:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:34.197 19:52:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:34.197 19:52:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:34.197 19:52:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:11:34.197 19:52:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:11:34.197 19:52:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:11:34.197 19:52:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:11:34.197 19:52:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:11:34.197 19:52:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=78249 00:11:34.197 Process raid pid: 78249 00:11:34.197 19:52:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 78249' 00:11:34.197 19:52:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 78249 00:11:34.197 19:52:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 78249 ']' 
00:11:34.197 19:52:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:34.197 19:52:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:34.197 19:52:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:34.197 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:34.197 19:52:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:34.197 19:52:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:34.197 19:52:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.197 [2024-11-26 19:52:25.048954] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 
00:11:34.197 [2024-11-26 19:52:25.049087] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:34.455 [2024-11-26 19:52:25.208653] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:34.455 [2024-11-26 19:52:25.325091] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:34.711 [2024-11-26 19:52:25.473824] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:34.711 [2024-11-26 19:52:25.473862] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:34.966 19:52:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:34.966 19:52:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:11:34.966 19:52:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:34.966 19:52:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.966 19:52:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.224 [2024-11-26 19:52:25.902681] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:35.224 [2024-11-26 19:52:25.902737] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:35.224 [2024-11-26 19:52:25.902748] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:35.224 [2024-11-26 19:52:25.902758] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:35.224 [2024-11-26 19:52:25.902764] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:11:35.224 [2024-11-26 19:52:25.902773] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:35.224 19:52:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.224 19:52:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:11:35.224 19:52:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:35.224 19:52:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:35.224 19:52:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:11:35.224 19:52:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:35.224 19:52:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:35.224 19:52:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:35.224 19:52:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:35.224 19:52:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:35.224 19:52:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:35.224 19:52:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:35.224 19:52:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:35.224 19:52:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.224 19:52:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.224 19:52:25 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.224 19:52:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:35.224 "name": "Existed_Raid", 00:11:35.224 "uuid": "ad418c61-7628-4602-980a-41d1229cd516", 00:11:35.224 "strip_size_kb": 64, 00:11:35.224 "state": "configuring", 00:11:35.224 "raid_level": "raid5f", 00:11:35.224 "superblock": true, 00:11:35.224 "num_base_bdevs": 3, 00:11:35.224 "num_base_bdevs_discovered": 0, 00:11:35.224 "num_base_bdevs_operational": 3, 00:11:35.224 "base_bdevs_list": [ 00:11:35.224 { 00:11:35.224 "name": "BaseBdev1", 00:11:35.224 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:35.224 "is_configured": false, 00:11:35.224 "data_offset": 0, 00:11:35.224 "data_size": 0 00:11:35.224 }, 00:11:35.224 { 00:11:35.224 "name": "BaseBdev2", 00:11:35.224 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:35.224 "is_configured": false, 00:11:35.224 "data_offset": 0, 00:11:35.224 "data_size": 0 00:11:35.224 }, 00:11:35.224 { 00:11:35.224 "name": "BaseBdev3", 00:11:35.224 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:35.224 "is_configured": false, 00:11:35.224 "data_offset": 0, 00:11:35.224 "data_size": 0 00:11:35.224 } 00:11:35.224 ] 00:11:35.224 }' 00:11:35.224 19:52:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:35.224 19:52:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.483 19:52:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:35.483 19:52:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.483 19:52:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.483 [2024-11-26 19:52:26.198684] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:35.483 
[2024-11-26 19:52:26.198727] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:35.483 19:52:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.483 19:52:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:35.483 19:52:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.483 19:52:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.483 [2024-11-26 19:52:26.206690] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:35.483 [2024-11-26 19:52:26.206736] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:35.483 [2024-11-26 19:52:26.206745] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:35.483 [2024-11-26 19:52:26.206755] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:35.483 [2024-11-26 19:52:26.206762] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:35.483 [2024-11-26 19:52:26.206771] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:35.483 19:52:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.483 19:52:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:35.483 19:52:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.483 19:52:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.483 [2024-11-26 19:52:26.241814] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:35.483 BaseBdev1 00:11:35.483 19:52:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.483 19:52:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:35.483 19:52:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:35.483 19:52:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:35.483 19:52:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:35.483 19:52:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:35.483 19:52:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:35.483 19:52:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:35.483 19:52:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.483 19:52:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.483 19:52:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.483 19:52:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:35.483 19:52:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.483 19:52:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.483 [ 00:11:35.483 { 00:11:35.483 "name": "BaseBdev1", 00:11:35.483 "aliases": [ 00:11:35.483 "5e3a4105-484a-4f22-bdae-fd449640489f" 00:11:35.483 ], 00:11:35.483 "product_name": "Malloc disk", 00:11:35.483 "block_size": 512, 00:11:35.483 
"num_blocks": 65536, 00:11:35.483 "uuid": "5e3a4105-484a-4f22-bdae-fd449640489f", 00:11:35.483 "assigned_rate_limits": { 00:11:35.483 "rw_ios_per_sec": 0, 00:11:35.483 "rw_mbytes_per_sec": 0, 00:11:35.483 "r_mbytes_per_sec": 0, 00:11:35.483 "w_mbytes_per_sec": 0 00:11:35.483 }, 00:11:35.483 "claimed": true, 00:11:35.483 "claim_type": "exclusive_write", 00:11:35.483 "zoned": false, 00:11:35.483 "supported_io_types": { 00:11:35.483 "read": true, 00:11:35.483 "write": true, 00:11:35.483 "unmap": true, 00:11:35.483 "flush": true, 00:11:35.483 "reset": true, 00:11:35.483 "nvme_admin": false, 00:11:35.483 "nvme_io": false, 00:11:35.483 "nvme_io_md": false, 00:11:35.483 "write_zeroes": true, 00:11:35.483 "zcopy": true, 00:11:35.483 "get_zone_info": false, 00:11:35.483 "zone_management": false, 00:11:35.483 "zone_append": false, 00:11:35.483 "compare": false, 00:11:35.483 "compare_and_write": false, 00:11:35.483 "abort": true, 00:11:35.483 "seek_hole": false, 00:11:35.483 "seek_data": false, 00:11:35.483 "copy": true, 00:11:35.483 "nvme_iov_md": false 00:11:35.483 }, 00:11:35.483 "memory_domains": [ 00:11:35.483 { 00:11:35.483 "dma_device_id": "system", 00:11:35.483 "dma_device_type": 1 00:11:35.483 }, 00:11:35.483 { 00:11:35.483 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:35.483 "dma_device_type": 2 00:11:35.483 } 00:11:35.483 ], 00:11:35.483 "driver_specific": {} 00:11:35.483 } 00:11:35.483 ] 00:11:35.483 19:52:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.483 19:52:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:35.483 19:52:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:11:35.483 19:52:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:35.483 19:52:26 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:35.483 19:52:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:11:35.483 19:52:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:35.483 19:52:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:35.483 19:52:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:35.484 19:52:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:35.484 19:52:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:35.484 19:52:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:35.484 19:52:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:35.484 19:52:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:35.484 19:52:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.484 19:52:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.484 19:52:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.484 19:52:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:35.484 "name": "Existed_Raid", 00:11:35.484 "uuid": "3081ffd6-3538-48d9-8b34-f1274e71ec7c", 00:11:35.484 "strip_size_kb": 64, 00:11:35.484 "state": "configuring", 00:11:35.484 "raid_level": "raid5f", 00:11:35.484 "superblock": true, 00:11:35.484 "num_base_bdevs": 3, 00:11:35.484 "num_base_bdevs_discovered": 1, 00:11:35.484 "num_base_bdevs_operational": 3, 00:11:35.484 "base_bdevs_list": [ 00:11:35.484 { 00:11:35.484 
"name": "BaseBdev1", 00:11:35.484 "uuid": "5e3a4105-484a-4f22-bdae-fd449640489f", 00:11:35.484 "is_configured": true, 00:11:35.484 "data_offset": 2048, 00:11:35.484 "data_size": 63488 00:11:35.484 }, 00:11:35.484 { 00:11:35.484 "name": "BaseBdev2", 00:11:35.484 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:35.484 "is_configured": false, 00:11:35.484 "data_offset": 0, 00:11:35.484 "data_size": 0 00:11:35.484 }, 00:11:35.484 { 00:11:35.484 "name": "BaseBdev3", 00:11:35.484 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:35.484 "is_configured": false, 00:11:35.484 "data_offset": 0, 00:11:35.484 "data_size": 0 00:11:35.484 } 00:11:35.484 ] 00:11:35.484 }' 00:11:35.484 19:52:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:35.484 19:52:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.742 19:52:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:35.742 19:52:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.742 19:52:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.742 [2024-11-26 19:52:26.581957] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:35.742 [2024-11-26 19:52:26.582016] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:35.742 19:52:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.743 19:52:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:35.743 19:52:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.743 19:52:26 bdev_raid.raid5f_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:11:35.743 [2024-11-26 19:52:26.590009] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:35.743 [2024-11-26 19:52:26.592046] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:35.743 [2024-11-26 19:52:26.592094] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:35.743 [2024-11-26 19:52:26.592104] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:35.743 [2024-11-26 19:52:26.592113] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:35.743 19:52:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.743 19:52:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:35.743 19:52:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:35.743 19:52:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:11:35.743 19:52:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:35.743 19:52:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:35.743 19:52:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:11:35.743 19:52:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:35.743 19:52:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:35.743 19:52:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:35.743 19:52:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:11:35.743 19:52:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:35.743 19:52:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:35.743 19:52:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:35.743 19:52:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:35.743 19:52:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.743 19:52:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.743 19:52:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.743 19:52:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:35.743 "name": "Existed_Raid", 00:11:35.743 "uuid": "bb29c633-191d-4a41-954f-ce7fe03f9488", 00:11:35.743 "strip_size_kb": 64, 00:11:35.743 "state": "configuring", 00:11:35.743 "raid_level": "raid5f", 00:11:35.743 "superblock": true, 00:11:35.743 "num_base_bdevs": 3, 00:11:35.743 "num_base_bdevs_discovered": 1, 00:11:35.743 "num_base_bdevs_operational": 3, 00:11:35.743 "base_bdevs_list": [ 00:11:35.743 { 00:11:35.743 "name": "BaseBdev1", 00:11:35.743 "uuid": "5e3a4105-484a-4f22-bdae-fd449640489f", 00:11:35.743 "is_configured": true, 00:11:35.743 "data_offset": 2048, 00:11:35.743 "data_size": 63488 00:11:35.743 }, 00:11:35.743 { 00:11:35.743 "name": "BaseBdev2", 00:11:35.743 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:35.743 "is_configured": false, 00:11:35.743 "data_offset": 0, 00:11:35.743 "data_size": 0 00:11:35.743 }, 00:11:35.743 { 00:11:35.743 "name": "BaseBdev3", 00:11:35.743 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:35.743 "is_configured": false, 00:11:35.743 "data_offset": 0, 00:11:35.743 "data_size": 
0 00:11:35.743 } 00:11:35.743 ] 00:11:35.743 }' 00:11:35.743 19:52:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:35.743 19:52:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.000 19:52:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:36.000 19:52:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.000 19:52:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.000 [2024-11-26 19:52:26.922910] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:36.000 BaseBdev2 00:11:36.000 19:52:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.000 19:52:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:36.000 19:52:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:36.000 19:52:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:36.000 19:52:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:36.000 19:52:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:36.000 19:52:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:36.000 19:52:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:36.000 19:52:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.000 19:52:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.000 19:52:26 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.000 19:52:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:36.000 19:52:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.258 19:52:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.258 [ 00:11:36.258 { 00:11:36.258 "name": "BaseBdev2", 00:11:36.258 "aliases": [ 00:11:36.258 "47b923d4-5922-4e7f-9d91-2acf74bb7e8c" 00:11:36.258 ], 00:11:36.258 "product_name": "Malloc disk", 00:11:36.258 "block_size": 512, 00:11:36.258 "num_blocks": 65536, 00:11:36.258 "uuid": "47b923d4-5922-4e7f-9d91-2acf74bb7e8c", 00:11:36.258 "assigned_rate_limits": { 00:11:36.258 "rw_ios_per_sec": 0, 00:11:36.258 "rw_mbytes_per_sec": 0, 00:11:36.258 "r_mbytes_per_sec": 0, 00:11:36.258 "w_mbytes_per_sec": 0 00:11:36.258 }, 00:11:36.258 "claimed": true, 00:11:36.258 "claim_type": "exclusive_write", 00:11:36.258 "zoned": false, 00:11:36.258 "supported_io_types": { 00:11:36.258 "read": true, 00:11:36.258 "write": true, 00:11:36.258 "unmap": true, 00:11:36.258 "flush": true, 00:11:36.258 "reset": true, 00:11:36.258 "nvme_admin": false, 00:11:36.258 "nvme_io": false, 00:11:36.258 "nvme_io_md": false, 00:11:36.258 "write_zeroes": true, 00:11:36.258 "zcopy": true, 00:11:36.258 "get_zone_info": false, 00:11:36.258 "zone_management": false, 00:11:36.258 "zone_append": false, 00:11:36.258 "compare": false, 00:11:36.258 "compare_and_write": false, 00:11:36.258 "abort": true, 00:11:36.258 "seek_hole": false, 00:11:36.258 "seek_data": false, 00:11:36.258 "copy": true, 00:11:36.258 "nvme_iov_md": false 00:11:36.258 }, 00:11:36.258 "memory_domains": [ 00:11:36.258 { 00:11:36.258 "dma_device_id": "system", 00:11:36.258 "dma_device_type": 1 00:11:36.258 }, 00:11:36.258 { 00:11:36.258 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:36.258 "dma_device_type": 2 00:11:36.258 } 
00:11:36.258 ], 00:11:36.258 "driver_specific": {} 00:11:36.258 } 00:11:36.258 ] 00:11:36.258 19:52:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.258 19:52:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:36.258 19:52:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:36.258 19:52:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:36.258 19:52:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:11:36.258 19:52:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:36.258 19:52:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:36.258 19:52:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:11:36.258 19:52:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:36.258 19:52:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:36.258 19:52:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:36.258 19:52:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:36.258 19:52:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:36.258 19:52:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:36.258 19:52:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:36.258 19:52:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:11:36.258 19:52:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.258 19:52:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.258 19:52:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.258 19:52:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:36.258 "name": "Existed_Raid", 00:11:36.258 "uuid": "bb29c633-191d-4a41-954f-ce7fe03f9488", 00:11:36.258 "strip_size_kb": 64, 00:11:36.258 "state": "configuring", 00:11:36.258 "raid_level": "raid5f", 00:11:36.258 "superblock": true, 00:11:36.258 "num_base_bdevs": 3, 00:11:36.258 "num_base_bdevs_discovered": 2, 00:11:36.258 "num_base_bdevs_operational": 3, 00:11:36.258 "base_bdevs_list": [ 00:11:36.258 { 00:11:36.258 "name": "BaseBdev1", 00:11:36.258 "uuid": "5e3a4105-484a-4f22-bdae-fd449640489f", 00:11:36.258 "is_configured": true, 00:11:36.258 "data_offset": 2048, 00:11:36.258 "data_size": 63488 00:11:36.258 }, 00:11:36.258 { 00:11:36.258 "name": "BaseBdev2", 00:11:36.258 "uuid": "47b923d4-5922-4e7f-9d91-2acf74bb7e8c", 00:11:36.258 "is_configured": true, 00:11:36.258 "data_offset": 2048, 00:11:36.258 "data_size": 63488 00:11:36.258 }, 00:11:36.258 { 00:11:36.258 "name": "BaseBdev3", 00:11:36.258 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:36.258 "is_configured": false, 00:11:36.258 "data_offset": 0, 00:11:36.258 "data_size": 0 00:11:36.258 } 00:11:36.258 ] 00:11:36.258 }' 00:11:36.258 19:52:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:36.258 19:52:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.516 19:52:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:36.516 19:52:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:11:36.516 19:52:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.516 [2024-11-26 19:52:27.312336] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:36.516 [2024-11-26 19:52:27.312786] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:36.516 [2024-11-26 19:52:27.312891] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:11:36.516 BaseBdev3 00:11:36.516 [2024-11-26 19:52:27.313201] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:11:36.516 19:52:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.516 19:52:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:36.516 19:52:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:36.516 19:52:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:36.516 19:52:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:36.516 19:52:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:36.516 19:52:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:36.516 19:52:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:36.516 19:52:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.516 19:52:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.516 [2024-11-26 19:52:27.317138] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:36.516 [2024-11-26 19:52:27.317158] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:11:36.516 [2024-11-26 19:52:27.317316] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:36.516 19:52:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.516 19:52:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:36.516 19:52:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.516 19:52:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.516 [ 00:11:36.516 { 00:11:36.516 "name": "BaseBdev3", 00:11:36.516 "aliases": [ 00:11:36.516 "1383019b-6687-46fa-b750-6784df86d9ba" 00:11:36.516 ], 00:11:36.516 "product_name": "Malloc disk", 00:11:36.516 "block_size": 512, 00:11:36.516 "num_blocks": 65536, 00:11:36.516 "uuid": "1383019b-6687-46fa-b750-6784df86d9ba", 00:11:36.516 "assigned_rate_limits": { 00:11:36.516 "rw_ios_per_sec": 0, 00:11:36.516 "rw_mbytes_per_sec": 0, 00:11:36.516 "r_mbytes_per_sec": 0, 00:11:36.516 "w_mbytes_per_sec": 0 00:11:36.516 }, 00:11:36.516 "claimed": true, 00:11:36.516 "claim_type": "exclusive_write", 00:11:36.516 "zoned": false, 00:11:36.516 "supported_io_types": { 00:11:36.516 "read": true, 00:11:36.516 "write": true, 00:11:36.516 "unmap": true, 00:11:36.516 "flush": true, 00:11:36.516 "reset": true, 00:11:36.516 "nvme_admin": false, 00:11:36.516 "nvme_io": false, 00:11:36.516 "nvme_io_md": false, 00:11:36.516 "write_zeroes": true, 00:11:36.516 "zcopy": true, 00:11:36.516 "get_zone_info": false, 00:11:36.516 "zone_management": false, 00:11:36.516 "zone_append": false, 00:11:36.516 "compare": false, 00:11:36.516 "compare_and_write": false, 00:11:36.516 "abort": true, 00:11:36.516 "seek_hole": false, 00:11:36.516 "seek_data": false, 00:11:36.516 "copy": true, 00:11:36.516 
"nvme_iov_md": false 00:11:36.516 }, 00:11:36.516 "memory_domains": [ 00:11:36.516 { 00:11:36.516 "dma_device_id": "system", 00:11:36.516 "dma_device_type": 1 00:11:36.516 }, 00:11:36.516 { 00:11:36.516 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:36.516 "dma_device_type": 2 00:11:36.516 } 00:11:36.516 ], 00:11:36.516 "driver_specific": {} 00:11:36.516 } 00:11:36.516 ] 00:11:36.516 19:52:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.516 19:52:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:36.516 19:52:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:36.516 19:52:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:36.516 19:52:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:11:36.516 19:52:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:36.516 19:52:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:36.516 19:52:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:11:36.516 19:52:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:36.516 19:52:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:36.516 19:52:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:36.516 19:52:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:36.516 19:52:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:36.516 19:52:27 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:11:36.516 19:52:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:36.516 19:52:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:36.516 19:52:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.516 19:52:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.516 19:52:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.516 19:52:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:36.516 "name": "Existed_Raid", 00:11:36.516 "uuid": "bb29c633-191d-4a41-954f-ce7fe03f9488", 00:11:36.516 "strip_size_kb": 64, 00:11:36.516 "state": "online", 00:11:36.516 "raid_level": "raid5f", 00:11:36.516 "superblock": true, 00:11:36.516 "num_base_bdevs": 3, 00:11:36.516 "num_base_bdevs_discovered": 3, 00:11:36.516 "num_base_bdevs_operational": 3, 00:11:36.516 "base_bdevs_list": [ 00:11:36.516 { 00:11:36.516 "name": "BaseBdev1", 00:11:36.516 "uuid": "5e3a4105-484a-4f22-bdae-fd449640489f", 00:11:36.516 "is_configured": true, 00:11:36.516 "data_offset": 2048, 00:11:36.516 "data_size": 63488 00:11:36.516 }, 00:11:36.516 { 00:11:36.516 "name": "BaseBdev2", 00:11:36.516 "uuid": "47b923d4-5922-4e7f-9d91-2acf74bb7e8c", 00:11:36.516 "is_configured": true, 00:11:36.516 "data_offset": 2048, 00:11:36.516 "data_size": 63488 00:11:36.516 }, 00:11:36.516 { 00:11:36.516 "name": "BaseBdev3", 00:11:36.516 "uuid": "1383019b-6687-46fa-b750-6784df86d9ba", 00:11:36.516 "is_configured": true, 00:11:36.516 "data_offset": 2048, 00:11:36.516 "data_size": 63488 00:11:36.516 } 00:11:36.516 ] 00:11:36.516 }' 00:11:36.516 19:52:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:36.516 19:52:27 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.773 19:52:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:36.773 19:52:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:36.773 19:52:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:36.773 19:52:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:36.773 19:52:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:36.773 19:52:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:36.773 19:52:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:36.773 19:52:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.773 19:52:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:36.773 19:52:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.773 [2024-11-26 19:52:27.637914] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:36.773 19:52:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.773 19:52:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:36.773 "name": "Existed_Raid", 00:11:36.773 "aliases": [ 00:11:36.773 "bb29c633-191d-4a41-954f-ce7fe03f9488" 00:11:36.773 ], 00:11:36.773 "product_name": "Raid Volume", 00:11:36.773 "block_size": 512, 00:11:36.773 "num_blocks": 126976, 00:11:36.773 "uuid": "bb29c633-191d-4a41-954f-ce7fe03f9488", 00:11:36.773 "assigned_rate_limits": { 00:11:36.773 "rw_ios_per_sec": 0, 00:11:36.773 
"rw_mbytes_per_sec": 0, 00:11:36.773 "r_mbytes_per_sec": 0, 00:11:36.773 "w_mbytes_per_sec": 0 00:11:36.773 }, 00:11:36.773 "claimed": false, 00:11:36.773 "zoned": false, 00:11:36.773 "supported_io_types": { 00:11:36.773 "read": true, 00:11:36.773 "write": true, 00:11:36.773 "unmap": false, 00:11:36.773 "flush": false, 00:11:36.773 "reset": true, 00:11:36.773 "nvme_admin": false, 00:11:36.773 "nvme_io": false, 00:11:36.773 "nvme_io_md": false, 00:11:36.773 "write_zeroes": true, 00:11:36.773 "zcopy": false, 00:11:36.773 "get_zone_info": false, 00:11:36.773 "zone_management": false, 00:11:36.773 "zone_append": false, 00:11:36.773 "compare": false, 00:11:36.773 "compare_and_write": false, 00:11:36.773 "abort": false, 00:11:36.773 "seek_hole": false, 00:11:36.773 "seek_data": false, 00:11:36.773 "copy": false, 00:11:36.773 "nvme_iov_md": false 00:11:36.773 }, 00:11:36.773 "driver_specific": { 00:11:36.773 "raid": { 00:11:36.773 "uuid": "bb29c633-191d-4a41-954f-ce7fe03f9488", 00:11:36.773 "strip_size_kb": 64, 00:11:36.773 "state": "online", 00:11:36.773 "raid_level": "raid5f", 00:11:36.773 "superblock": true, 00:11:36.773 "num_base_bdevs": 3, 00:11:36.773 "num_base_bdevs_discovered": 3, 00:11:36.773 "num_base_bdevs_operational": 3, 00:11:36.773 "base_bdevs_list": [ 00:11:36.773 { 00:11:36.773 "name": "BaseBdev1", 00:11:36.773 "uuid": "5e3a4105-484a-4f22-bdae-fd449640489f", 00:11:36.773 "is_configured": true, 00:11:36.773 "data_offset": 2048, 00:11:36.773 "data_size": 63488 00:11:36.773 }, 00:11:36.773 { 00:11:36.773 "name": "BaseBdev2", 00:11:36.773 "uuid": "47b923d4-5922-4e7f-9d91-2acf74bb7e8c", 00:11:36.773 "is_configured": true, 00:11:36.773 "data_offset": 2048, 00:11:36.773 "data_size": 63488 00:11:36.773 }, 00:11:36.773 { 00:11:36.773 "name": "BaseBdev3", 00:11:36.773 "uuid": "1383019b-6687-46fa-b750-6784df86d9ba", 00:11:36.773 "is_configured": true, 00:11:36.773 "data_offset": 2048, 00:11:36.773 "data_size": 63488 00:11:36.773 } 00:11:36.773 ] 00:11:36.773 } 
00:11:36.773 } 00:11:36.773 }' 00:11:36.773 19:52:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:36.773 19:52:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:36.773 BaseBdev2 00:11:36.773 BaseBdev3' 00:11:36.773 19:52:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:37.031 19:52:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:37.031 19:52:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:37.031 19:52:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:37.031 19:52:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:37.031 19:52:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.031 19:52:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.031 19:52:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.031 19:52:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:37.031 19:52:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:37.031 19:52:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:37.031 19:52:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:37.031 19:52:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:11:37.031 19:52:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.031 19:52:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:37.031 19:52:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.031 19:52:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:37.031 19:52:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:37.031 19:52:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:37.031 19:52:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:37.031 19:52:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.031 19:52:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.031 19:52:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:37.031 19:52:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.031 19:52:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:37.031 19:52:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:37.031 19:52:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:37.031 19:52:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.031 19:52:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.031 [2024-11-26 19:52:27.817748] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:37.031 19:52:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.031 19:52:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:37.031 19:52:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:11:37.031 19:52:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:37.031 19:52:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:11:37.031 19:52:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:11:37.031 19:52:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:11:37.031 19:52:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:37.031 19:52:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:37.031 19:52:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:11:37.031 19:52:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:37.031 19:52:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:37.031 19:52:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:37.031 19:52:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:37.031 19:52:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:37.031 19:52:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:37.031 19:52:27 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:37.031 19:52:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.031 19:52:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.031 19:52:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:37.031 19:52:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.031 19:52:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:37.031 "name": "Existed_Raid", 00:11:37.031 "uuid": "bb29c633-191d-4a41-954f-ce7fe03f9488", 00:11:37.031 "strip_size_kb": 64, 00:11:37.031 "state": "online", 00:11:37.031 "raid_level": "raid5f", 00:11:37.031 "superblock": true, 00:11:37.031 "num_base_bdevs": 3, 00:11:37.031 "num_base_bdevs_discovered": 2, 00:11:37.031 "num_base_bdevs_operational": 2, 00:11:37.031 "base_bdevs_list": [ 00:11:37.031 { 00:11:37.031 "name": null, 00:11:37.031 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:37.031 "is_configured": false, 00:11:37.031 "data_offset": 0, 00:11:37.031 "data_size": 63488 00:11:37.031 }, 00:11:37.031 { 00:11:37.031 "name": "BaseBdev2", 00:11:37.031 "uuid": "47b923d4-5922-4e7f-9d91-2acf74bb7e8c", 00:11:37.031 "is_configured": true, 00:11:37.031 "data_offset": 2048, 00:11:37.031 "data_size": 63488 00:11:37.031 }, 00:11:37.031 { 00:11:37.031 "name": "BaseBdev3", 00:11:37.031 "uuid": "1383019b-6687-46fa-b750-6784df86d9ba", 00:11:37.031 "is_configured": true, 00:11:37.031 "data_offset": 2048, 00:11:37.031 "data_size": 63488 00:11:37.031 } 00:11:37.031 ] 00:11:37.031 }' 00:11:37.031 19:52:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:37.031 19:52:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.289 19:52:28 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:37.289 19:52:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:37.289 19:52:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:37.289 19:52:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:37.289 19:52:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.289 19:52:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.289 19:52:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.289 19:52:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:37.289 19:52:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:37.289 19:52:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:37.289 19:52:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.289 19:52:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.289 [2024-11-26 19:52:28.196985] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:37.289 [2024-11-26 19:52:28.197143] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:37.547 [2024-11-26 19:52:28.259448] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:37.547 19:52:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.547 19:52:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:37.547 19:52:28 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:37.547 19:52:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:37.547 19:52:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:37.547 19:52:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.547 19:52:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.547 19:52:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.547 19:52:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:37.547 19:52:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:37.547 19:52:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:37.547 19:52:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.547 19:52:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.547 [2024-11-26 19:52:28.295481] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:37.547 [2024-11-26 19:52:28.295530] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:37.547 19:52:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.547 19:52:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:37.547 19:52:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:37.547 19:52:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:37.547 19:52:28 bdev_raid.raid5f_state_function_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.547 19:52:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.547 19:52:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:37.547 19:52:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.547 19:52:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:37.547 19:52:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:37.547 19:52:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:11:37.547 19:52:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:37.547 19:52:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:37.547 19:52:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:37.547 19:52:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.547 19:52:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.547 BaseBdev2 00:11:37.547 19:52:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.547 19:52:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:37.547 19:52:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:37.547 19:52:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:37.548 19:52:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:37.548 19:52:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # 
[[ -z '' ]] 00:11:37.548 19:52:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:37.548 19:52:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:37.548 19:52:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.548 19:52:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.548 19:52:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.548 19:52:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:37.548 19:52:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.548 19:52:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.548 [ 00:11:37.548 { 00:11:37.548 "name": "BaseBdev2", 00:11:37.548 "aliases": [ 00:11:37.548 "19ae6bda-2022-4130-8061-73607ea19a99" 00:11:37.548 ], 00:11:37.548 "product_name": "Malloc disk", 00:11:37.548 "block_size": 512, 00:11:37.548 "num_blocks": 65536, 00:11:37.548 "uuid": "19ae6bda-2022-4130-8061-73607ea19a99", 00:11:37.548 "assigned_rate_limits": { 00:11:37.548 "rw_ios_per_sec": 0, 00:11:37.548 "rw_mbytes_per_sec": 0, 00:11:37.548 "r_mbytes_per_sec": 0, 00:11:37.548 "w_mbytes_per_sec": 0 00:11:37.548 }, 00:11:37.548 "claimed": false, 00:11:37.548 "zoned": false, 00:11:37.548 "supported_io_types": { 00:11:37.548 "read": true, 00:11:37.548 "write": true, 00:11:37.548 "unmap": true, 00:11:37.548 "flush": true, 00:11:37.548 "reset": true, 00:11:37.548 "nvme_admin": false, 00:11:37.548 "nvme_io": false, 00:11:37.548 "nvme_io_md": false, 00:11:37.548 "write_zeroes": true, 00:11:37.548 "zcopy": true, 00:11:37.548 "get_zone_info": false, 00:11:37.548 "zone_management": false, 00:11:37.548 "zone_append": false, 
00:11:37.548 "compare": false, 00:11:37.548 "compare_and_write": false, 00:11:37.548 "abort": true, 00:11:37.548 "seek_hole": false, 00:11:37.548 "seek_data": false, 00:11:37.548 "copy": true, 00:11:37.548 "nvme_iov_md": false 00:11:37.548 }, 00:11:37.548 "memory_domains": [ 00:11:37.548 { 00:11:37.548 "dma_device_id": "system", 00:11:37.548 "dma_device_type": 1 00:11:37.548 }, 00:11:37.548 { 00:11:37.548 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:37.548 "dma_device_type": 2 00:11:37.548 } 00:11:37.548 ], 00:11:37.548 "driver_specific": {} 00:11:37.548 } 00:11:37.548 ] 00:11:37.548 19:52:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.548 19:52:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:37.548 19:52:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:37.548 19:52:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:37.548 19:52:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:37.548 19:52:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.548 19:52:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.548 BaseBdev3 00:11:37.548 19:52:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.548 19:52:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:37.548 19:52:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:37.548 19:52:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:37.548 19:52:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:37.548 
19:52:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:37.548 19:52:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:37.548 19:52:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:37.548 19:52:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.548 19:52:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.808 19:52:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.808 19:52:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:37.808 19:52:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.808 19:52:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.808 [ 00:11:37.808 { 00:11:37.808 "name": "BaseBdev3", 00:11:37.808 "aliases": [ 00:11:37.808 "59fe036a-2d3f-4cba-b35d-55d0f983ffed" 00:11:37.808 ], 00:11:37.808 "product_name": "Malloc disk", 00:11:37.808 "block_size": 512, 00:11:37.808 "num_blocks": 65536, 00:11:37.808 "uuid": "59fe036a-2d3f-4cba-b35d-55d0f983ffed", 00:11:37.808 "assigned_rate_limits": { 00:11:37.808 "rw_ios_per_sec": 0, 00:11:37.808 "rw_mbytes_per_sec": 0, 00:11:37.808 "r_mbytes_per_sec": 0, 00:11:37.808 "w_mbytes_per_sec": 0 00:11:37.808 }, 00:11:37.808 "claimed": false, 00:11:37.808 "zoned": false, 00:11:37.808 "supported_io_types": { 00:11:37.808 "read": true, 00:11:37.808 "write": true, 00:11:37.808 "unmap": true, 00:11:37.808 "flush": true, 00:11:37.808 "reset": true, 00:11:37.808 "nvme_admin": false, 00:11:37.808 "nvme_io": false, 00:11:37.808 "nvme_io_md": false, 00:11:37.808 "write_zeroes": true, 00:11:37.808 "zcopy": true, 00:11:37.808 "get_zone_info": 
false, 00:11:37.808 "zone_management": false, 00:11:37.808 "zone_append": false, 00:11:37.808 "compare": false, 00:11:37.808 "compare_and_write": false, 00:11:37.808 "abort": true, 00:11:37.808 "seek_hole": false, 00:11:37.808 "seek_data": false, 00:11:37.808 "copy": true, 00:11:37.808 "nvme_iov_md": false 00:11:37.808 }, 00:11:37.808 "memory_domains": [ 00:11:37.808 { 00:11:37.808 "dma_device_id": "system", 00:11:37.808 "dma_device_type": 1 00:11:37.808 }, 00:11:37.808 { 00:11:37.808 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:37.808 "dma_device_type": 2 00:11:37.808 } 00:11:37.808 ], 00:11:37.808 "driver_specific": {} 00:11:37.808 } 00:11:37.808 ] 00:11:37.808 19:52:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.808 19:52:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:37.808 19:52:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:37.808 19:52:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:37.808 19:52:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:37.808 19:52:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.808 19:52:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.808 [2024-11-26 19:52:28.511052] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:37.808 [2024-11-26 19:52:28.511223] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:37.808 [2024-11-26 19:52:28.511324] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:37.808 [2024-11-26 19:52:28.513368] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev3 is claimed 00:11:37.808 19:52:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.808 19:52:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:11:37.808 19:52:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:37.808 19:52:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:37.808 19:52:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:11:37.808 19:52:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:37.808 19:52:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:37.808 19:52:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:37.808 19:52:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:37.808 19:52:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:37.808 19:52:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:37.808 19:52:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:37.808 19:52:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:37.808 19:52:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.808 19:52:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.808 19:52:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.808 19:52:28 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:37.808 "name": "Existed_Raid", 00:11:37.808 "uuid": "0c51df83-6c48-4b80-ad85-167c1875fa62", 00:11:37.808 "strip_size_kb": 64, 00:11:37.808 "state": "configuring", 00:11:37.808 "raid_level": "raid5f", 00:11:37.808 "superblock": true, 00:11:37.808 "num_base_bdevs": 3, 00:11:37.808 "num_base_bdevs_discovered": 2, 00:11:37.808 "num_base_bdevs_operational": 3, 00:11:37.808 "base_bdevs_list": [ 00:11:37.808 { 00:11:37.808 "name": "BaseBdev1", 00:11:37.808 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:37.808 "is_configured": false, 00:11:37.808 "data_offset": 0, 00:11:37.808 "data_size": 0 00:11:37.808 }, 00:11:37.808 { 00:11:37.808 "name": "BaseBdev2", 00:11:37.808 "uuid": "19ae6bda-2022-4130-8061-73607ea19a99", 00:11:37.808 "is_configured": true, 00:11:37.808 "data_offset": 2048, 00:11:37.808 "data_size": 63488 00:11:37.808 }, 00:11:37.808 { 00:11:37.808 "name": "BaseBdev3", 00:11:37.808 "uuid": "59fe036a-2d3f-4cba-b35d-55d0f983ffed", 00:11:37.808 "is_configured": true, 00:11:37.809 "data_offset": 2048, 00:11:37.809 "data_size": 63488 00:11:37.809 } 00:11:37.809 ] 00:11:37.809 }' 00:11:37.809 19:52:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:37.809 19:52:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.068 19:52:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:38.068 19:52:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.068 19:52:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.068 [2024-11-26 19:52:28.807142] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:38.068 19:52:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.068 
19:52:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:11:38.068 19:52:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:38.068 19:52:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:38.068 19:52:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:11:38.068 19:52:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:38.068 19:52:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:38.068 19:52:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:38.068 19:52:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:38.068 19:52:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:38.068 19:52:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:38.068 19:52:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:38.068 19:52:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:38.068 19:52:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.068 19:52:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.068 19:52:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.068 19:52:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:38.068 "name": "Existed_Raid", 00:11:38.068 "uuid": 
"0c51df83-6c48-4b80-ad85-167c1875fa62", 00:11:38.068 "strip_size_kb": 64, 00:11:38.068 "state": "configuring", 00:11:38.068 "raid_level": "raid5f", 00:11:38.068 "superblock": true, 00:11:38.068 "num_base_bdevs": 3, 00:11:38.068 "num_base_bdevs_discovered": 1, 00:11:38.068 "num_base_bdevs_operational": 3, 00:11:38.068 "base_bdevs_list": [ 00:11:38.068 { 00:11:38.069 "name": "BaseBdev1", 00:11:38.069 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:38.069 "is_configured": false, 00:11:38.069 "data_offset": 0, 00:11:38.069 "data_size": 0 00:11:38.069 }, 00:11:38.069 { 00:11:38.069 "name": null, 00:11:38.069 "uuid": "19ae6bda-2022-4130-8061-73607ea19a99", 00:11:38.069 "is_configured": false, 00:11:38.069 "data_offset": 0, 00:11:38.069 "data_size": 63488 00:11:38.069 }, 00:11:38.069 { 00:11:38.069 "name": "BaseBdev3", 00:11:38.069 "uuid": "59fe036a-2d3f-4cba-b35d-55d0f983ffed", 00:11:38.069 "is_configured": true, 00:11:38.069 "data_offset": 2048, 00:11:38.069 "data_size": 63488 00:11:38.069 } 00:11:38.069 ] 00:11:38.069 }' 00:11:38.069 19:52:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:38.069 19:52:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.328 19:52:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:38.328 19:52:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.328 19:52:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.328 19:52:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:38.328 19:52:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.328 19:52:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:38.328 19:52:29 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:38.328 19:52:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.328 19:52:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.328 [2024-11-26 19:52:29.188208] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:38.328 BaseBdev1 00:11:38.328 19:52:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.328 19:52:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:38.328 19:52:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:38.328 19:52:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:38.328 19:52:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:38.328 19:52:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:38.328 19:52:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:38.328 19:52:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:38.328 19:52:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.328 19:52:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.328 19:52:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.328 19:52:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:38.328 19:52:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 
-- # xtrace_disable 00:11:38.328 19:52:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.328 [ 00:11:38.328 { 00:11:38.328 "name": "BaseBdev1", 00:11:38.328 "aliases": [ 00:11:38.328 "abd22dd0-79aa-4cfd-98bd-86230a366fff" 00:11:38.328 ], 00:11:38.328 "product_name": "Malloc disk", 00:11:38.328 "block_size": 512, 00:11:38.328 "num_blocks": 65536, 00:11:38.328 "uuid": "abd22dd0-79aa-4cfd-98bd-86230a366fff", 00:11:38.328 "assigned_rate_limits": { 00:11:38.328 "rw_ios_per_sec": 0, 00:11:38.328 "rw_mbytes_per_sec": 0, 00:11:38.328 "r_mbytes_per_sec": 0, 00:11:38.328 "w_mbytes_per_sec": 0 00:11:38.328 }, 00:11:38.328 "claimed": true, 00:11:38.328 "claim_type": "exclusive_write", 00:11:38.328 "zoned": false, 00:11:38.328 "supported_io_types": { 00:11:38.328 "read": true, 00:11:38.328 "write": true, 00:11:38.328 "unmap": true, 00:11:38.328 "flush": true, 00:11:38.328 "reset": true, 00:11:38.328 "nvme_admin": false, 00:11:38.328 "nvme_io": false, 00:11:38.328 "nvme_io_md": false, 00:11:38.328 "write_zeroes": true, 00:11:38.328 "zcopy": true, 00:11:38.328 "get_zone_info": false, 00:11:38.328 "zone_management": false, 00:11:38.328 "zone_append": false, 00:11:38.328 "compare": false, 00:11:38.328 "compare_and_write": false, 00:11:38.328 "abort": true, 00:11:38.328 "seek_hole": false, 00:11:38.328 "seek_data": false, 00:11:38.328 "copy": true, 00:11:38.328 "nvme_iov_md": false 00:11:38.328 }, 00:11:38.328 "memory_domains": [ 00:11:38.328 { 00:11:38.328 "dma_device_id": "system", 00:11:38.328 "dma_device_type": 1 00:11:38.328 }, 00:11:38.328 { 00:11:38.328 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:38.328 "dma_device_type": 2 00:11:38.328 } 00:11:38.328 ], 00:11:38.328 "driver_specific": {} 00:11:38.328 } 00:11:38.328 ] 00:11:38.328 19:52:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.328 19:52:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # 
return 0 00:11:38.328 19:52:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:11:38.328 19:52:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:38.328 19:52:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:38.328 19:52:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:11:38.328 19:52:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:38.328 19:52:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:38.328 19:52:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:38.328 19:52:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:38.328 19:52:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:38.328 19:52:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:38.328 19:52:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:38.328 19:52:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:38.328 19:52:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.328 19:52:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.328 19:52:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.328 19:52:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:38.328 "name": "Existed_Raid", 00:11:38.328 "uuid": 
"0c51df83-6c48-4b80-ad85-167c1875fa62", 00:11:38.328 "strip_size_kb": 64, 00:11:38.328 "state": "configuring", 00:11:38.328 "raid_level": "raid5f", 00:11:38.328 "superblock": true, 00:11:38.328 "num_base_bdevs": 3, 00:11:38.328 "num_base_bdevs_discovered": 2, 00:11:38.328 "num_base_bdevs_operational": 3, 00:11:38.328 "base_bdevs_list": [ 00:11:38.328 { 00:11:38.328 "name": "BaseBdev1", 00:11:38.328 "uuid": "abd22dd0-79aa-4cfd-98bd-86230a366fff", 00:11:38.328 "is_configured": true, 00:11:38.328 "data_offset": 2048, 00:11:38.328 "data_size": 63488 00:11:38.328 }, 00:11:38.328 { 00:11:38.328 "name": null, 00:11:38.328 "uuid": "19ae6bda-2022-4130-8061-73607ea19a99", 00:11:38.328 "is_configured": false, 00:11:38.328 "data_offset": 0, 00:11:38.328 "data_size": 63488 00:11:38.328 }, 00:11:38.328 { 00:11:38.328 "name": "BaseBdev3", 00:11:38.328 "uuid": "59fe036a-2d3f-4cba-b35d-55d0f983ffed", 00:11:38.328 "is_configured": true, 00:11:38.328 "data_offset": 2048, 00:11:38.328 "data_size": 63488 00:11:38.328 } 00:11:38.328 ] 00:11:38.328 }' 00:11:38.328 19:52:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:38.328 19:52:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.900 19:52:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:38.900 19:52:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.900 19:52:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.900 19:52:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:38.900 19:52:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.900 19:52:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:38.900 19:52:29 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:38.900 19:52:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.900 19:52:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.900 [2024-11-26 19:52:29.556320] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:38.900 19:52:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.900 19:52:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:11:38.900 19:52:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:38.900 19:52:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:38.900 19:52:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:11:38.900 19:52:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:38.900 19:52:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:38.900 19:52:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:38.900 19:52:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:38.900 19:52:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:38.900 19:52:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:38.900 19:52:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:38.900 19:52:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:11:38.900 19:52:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.900 19:52:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:38.900 19:52:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.900 19:52:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:38.900 "name": "Existed_Raid", 00:11:38.900 "uuid": "0c51df83-6c48-4b80-ad85-167c1875fa62", 00:11:38.900 "strip_size_kb": 64, 00:11:38.900 "state": "configuring", 00:11:38.900 "raid_level": "raid5f", 00:11:38.900 "superblock": true, 00:11:38.900 "num_base_bdevs": 3, 00:11:38.900 "num_base_bdevs_discovered": 1, 00:11:38.900 "num_base_bdevs_operational": 3, 00:11:38.900 "base_bdevs_list": [ 00:11:38.900 { 00:11:38.900 "name": "BaseBdev1", 00:11:38.900 "uuid": "abd22dd0-79aa-4cfd-98bd-86230a366fff", 00:11:38.900 "is_configured": true, 00:11:38.900 "data_offset": 2048, 00:11:38.900 "data_size": 63488 00:11:38.900 }, 00:11:38.900 { 00:11:38.900 "name": null, 00:11:38.900 "uuid": "19ae6bda-2022-4130-8061-73607ea19a99", 00:11:38.900 "is_configured": false, 00:11:38.900 "data_offset": 0, 00:11:38.900 "data_size": 63488 00:11:38.900 }, 00:11:38.900 { 00:11:38.900 "name": null, 00:11:38.900 "uuid": "59fe036a-2d3f-4cba-b35d-55d0f983ffed", 00:11:38.900 "is_configured": false, 00:11:38.900 "data_offset": 0, 00:11:38.900 "data_size": 63488 00:11:38.900 } 00:11:38.900 ] 00:11:38.900 }' 00:11:38.900 19:52:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:38.900 19:52:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.159 19:52:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:39.159 19:52:29 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.159 19:52:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.159 19:52:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:39.159 19:52:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.159 19:52:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:39.159 19:52:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:39.159 19:52:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.159 19:52:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.159 [2024-11-26 19:52:29.924456] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:39.159 19:52:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.159 19:52:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:11:39.159 19:52:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:39.159 19:52:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:39.159 19:52:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:11:39.159 19:52:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:39.159 19:52:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:39.159 19:52:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:11:39.159 19:52:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:39.159 19:52:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:39.159 19:52:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:39.159 19:52:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:39.159 19:52:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.159 19:52:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.159 19:52:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:39.159 19:52:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.159 19:52:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:39.159 "name": "Existed_Raid", 00:11:39.159 "uuid": "0c51df83-6c48-4b80-ad85-167c1875fa62", 00:11:39.159 "strip_size_kb": 64, 00:11:39.159 "state": "configuring", 00:11:39.159 "raid_level": "raid5f", 00:11:39.160 "superblock": true, 00:11:39.160 "num_base_bdevs": 3, 00:11:39.160 "num_base_bdevs_discovered": 2, 00:11:39.160 "num_base_bdevs_operational": 3, 00:11:39.160 "base_bdevs_list": [ 00:11:39.160 { 00:11:39.160 "name": "BaseBdev1", 00:11:39.160 "uuid": "abd22dd0-79aa-4cfd-98bd-86230a366fff", 00:11:39.160 "is_configured": true, 00:11:39.160 "data_offset": 2048, 00:11:39.160 "data_size": 63488 00:11:39.160 }, 00:11:39.160 { 00:11:39.160 "name": null, 00:11:39.160 "uuid": "19ae6bda-2022-4130-8061-73607ea19a99", 00:11:39.160 "is_configured": false, 00:11:39.160 "data_offset": 0, 00:11:39.160 "data_size": 63488 00:11:39.160 }, 00:11:39.160 { 00:11:39.160 "name": "BaseBdev3", 00:11:39.160 "uuid": "59fe036a-2d3f-4cba-b35d-55d0f983ffed", 
00:11:39.160 "is_configured": true, 00:11:39.160 "data_offset": 2048, 00:11:39.160 "data_size": 63488 00:11:39.160 } 00:11:39.160 ] 00:11:39.160 }' 00:11:39.160 19:52:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:39.160 19:52:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.418 19:52:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:39.418 19:52:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:39.418 19:52:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.418 19:52:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.418 19:52:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.418 19:52:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:39.418 19:52:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:39.418 19:52:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.418 19:52:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.418 [2024-11-26 19:52:30.300512] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:39.418 19:52:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.418 19:52:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:11:39.418 19:52:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:39.418 19:52:30 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:39.418 19:52:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:11:39.418 19:52:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:39.418 19:52:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:39.418 19:52:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:39.418 19:52:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:39.677 19:52:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:39.677 19:52:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:39.677 19:52:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:39.677 19:52:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:39.677 19:52:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.677 19:52:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.677 19:52:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.677 19:52:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:39.677 "name": "Existed_Raid", 00:11:39.677 "uuid": "0c51df83-6c48-4b80-ad85-167c1875fa62", 00:11:39.677 "strip_size_kb": 64, 00:11:39.677 "state": "configuring", 00:11:39.677 "raid_level": "raid5f", 00:11:39.677 "superblock": true, 00:11:39.677 "num_base_bdevs": 3, 00:11:39.677 "num_base_bdevs_discovered": 1, 00:11:39.677 "num_base_bdevs_operational": 3, 00:11:39.677 "base_bdevs_list": [ 00:11:39.677 { 00:11:39.677 
"name": null, 00:11:39.677 "uuid": "abd22dd0-79aa-4cfd-98bd-86230a366fff", 00:11:39.677 "is_configured": false, 00:11:39.677 "data_offset": 0, 00:11:39.677 "data_size": 63488 00:11:39.677 }, 00:11:39.677 { 00:11:39.677 "name": null, 00:11:39.677 "uuid": "19ae6bda-2022-4130-8061-73607ea19a99", 00:11:39.677 "is_configured": false, 00:11:39.677 "data_offset": 0, 00:11:39.677 "data_size": 63488 00:11:39.677 }, 00:11:39.677 { 00:11:39.677 "name": "BaseBdev3", 00:11:39.677 "uuid": "59fe036a-2d3f-4cba-b35d-55d0f983ffed", 00:11:39.677 "is_configured": true, 00:11:39.677 "data_offset": 2048, 00:11:39.677 "data_size": 63488 00:11:39.677 } 00:11:39.677 ] 00:11:39.677 }' 00:11:39.677 19:52:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:39.677 19:52:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.936 19:52:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:39.936 19:52:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:39.936 19:52:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.936 19:52:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.936 19:52:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.936 19:52:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:39.936 19:52:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:39.936 19:52:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.936 19:52:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.936 [2024-11-26 
19:52:30.698655] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:39.936 19:52:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.936 19:52:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:11:39.936 19:52:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:39.936 19:52:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:39.936 19:52:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:11:39.936 19:52:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:39.936 19:52:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:39.936 19:52:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:39.936 19:52:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:39.936 19:52:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:39.936 19:52:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:39.936 19:52:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:39.936 19:52:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.936 19:52:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.936 19:52:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:39.936 19:52:30 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.936 19:52:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:39.936 "name": "Existed_Raid", 00:11:39.936 "uuid": "0c51df83-6c48-4b80-ad85-167c1875fa62", 00:11:39.936 "strip_size_kb": 64, 00:11:39.936 "state": "configuring", 00:11:39.936 "raid_level": "raid5f", 00:11:39.936 "superblock": true, 00:11:39.936 "num_base_bdevs": 3, 00:11:39.936 "num_base_bdevs_discovered": 2, 00:11:39.936 "num_base_bdevs_operational": 3, 00:11:39.936 "base_bdevs_list": [ 00:11:39.936 { 00:11:39.937 "name": null, 00:11:39.937 "uuid": "abd22dd0-79aa-4cfd-98bd-86230a366fff", 00:11:39.937 "is_configured": false, 00:11:39.937 "data_offset": 0, 00:11:39.937 "data_size": 63488 00:11:39.937 }, 00:11:39.937 { 00:11:39.937 "name": "BaseBdev2", 00:11:39.937 "uuid": "19ae6bda-2022-4130-8061-73607ea19a99", 00:11:39.937 "is_configured": true, 00:11:39.937 "data_offset": 2048, 00:11:39.937 "data_size": 63488 00:11:39.937 }, 00:11:39.937 { 00:11:39.937 "name": "BaseBdev3", 00:11:39.937 "uuid": "59fe036a-2d3f-4cba-b35d-55d0f983ffed", 00:11:39.937 "is_configured": true, 00:11:39.937 "data_offset": 2048, 00:11:39.937 "data_size": 63488 00:11:39.937 } 00:11:39.937 ] 00:11:39.937 }' 00:11:39.937 19:52:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:39.937 19:52:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.195 19:52:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:40.195 19:52:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.195 19:52:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.195 19:52:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:40.195 19:52:31 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.195 19:52:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:40.195 19:52:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:40.195 19:52:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.195 19:52:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.195 19:52:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:40.195 19:52:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.195 19:52:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u abd22dd0-79aa-4cfd-98bd-86230a366fff 00:11:40.195 19:52:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.195 19:52:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.453 [2024-11-26 19:52:31.132374] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:40.453 [2024-11-26 19:52:31.132575] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:40.453 [2024-11-26 19:52:31.132589] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:11:40.453 NewBaseBdev 00:11:40.453 [2024-11-26 19:52:31.132820] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:11:40.453 19:52:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.453 19:52:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:40.453 19:52:31 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:11:40.453 19:52:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:40.453 19:52:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:40.453 19:52:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:40.453 19:52:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:40.453 19:52:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:40.453 19:52:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.453 19:52:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.453 [2024-11-26 19:52:31.135825] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:40.453 [2024-11-26 19:52:31.135842] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:40.453 [2024-11-26 19:52:31.135959] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:40.453 19:52:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.453 19:52:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:40.453 19:52:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.453 19:52:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.453 [ 00:11:40.453 { 00:11:40.453 "name": "NewBaseBdev", 00:11:40.453 "aliases": [ 00:11:40.453 "abd22dd0-79aa-4cfd-98bd-86230a366fff" 00:11:40.453 ], 00:11:40.453 "product_name": "Malloc 
disk", 00:11:40.453 "block_size": 512, 00:11:40.453 "num_blocks": 65536, 00:11:40.453 "uuid": "abd22dd0-79aa-4cfd-98bd-86230a366fff", 00:11:40.453 "assigned_rate_limits": { 00:11:40.453 "rw_ios_per_sec": 0, 00:11:40.453 "rw_mbytes_per_sec": 0, 00:11:40.453 "r_mbytes_per_sec": 0, 00:11:40.453 "w_mbytes_per_sec": 0 00:11:40.453 }, 00:11:40.453 "claimed": true, 00:11:40.454 "claim_type": "exclusive_write", 00:11:40.454 "zoned": false, 00:11:40.454 "supported_io_types": { 00:11:40.454 "read": true, 00:11:40.454 "write": true, 00:11:40.454 "unmap": true, 00:11:40.454 "flush": true, 00:11:40.454 "reset": true, 00:11:40.454 "nvme_admin": false, 00:11:40.454 "nvme_io": false, 00:11:40.454 "nvme_io_md": false, 00:11:40.454 "write_zeroes": true, 00:11:40.454 "zcopy": true, 00:11:40.454 "get_zone_info": false, 00:11:40.454 "zone_management": false, 00:11:40.454 "zone_append": false, 00:11:40.454 "compare": false, 00:11:40.454 "compare_and_write": false, 00:11:40.454 "abort": true, 00:11:40.454 "seek_hole": false, 00:11:40.454 "seek_data": false, 00:11:40.454 "copy": true, 00:11:40.454 "nvme_iov_md": false 00:11:40.454 }, 00:11:40.454 "memory_domains": [ 00:11:40.454 { 00:11:40.454 "dma_device_id": "system", 00:11:40.454 "dma_device_type": 1 00:11:40.454 }, 00:11:40.454 { 00:11:40.454 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:40.454 "dma_device_type": 2 00:11:40.454 } 00:11:40.454 ], 00:11:40.454 "driver_specific": {} 00:11:40.454 } 00:11:40.454 ] 00:11:40.454 19:52:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.454 19:52:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:40.454 19:52:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:11:40.454 19:52:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:40.454 19:52:31 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:40.454 19:52:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:11:40.454 19:52:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:40.454 19:52:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:40.454 19:52:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:40.454 19:52:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:40.454 19:52:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:40.454 19:52:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:40.454 19:52:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:40.454 19:52:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.454 19:52:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:40.454 19:52:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.454 19:52:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.454 19:52:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:40.454 "name": "Existed_Raid", 00:11:40.454 "uuid": "0c51df83-6c48-4b80-ad85-167c1875fa62", 00:11:40.454 "strip_size_kb": 64, 00:11:40.454 "state": "online", 00:11:40.454 "raid_level": "raid5f", 00:11:40.454 "superblock": true, 00:11:40.454 "num_base_bdevs": 3, 00:11:40.454 "num_base_bdevs_discovered": 3, 00:11:40.454 "num_base_bdevs_operational": 3, 00:11:40.454 
"base_bdevs_list": [ 00:11:40.454 { 00:11:40.454 "name": "NewBaseBdev", 00:11:40.454 "uuid": "abd22dd0-79aa-4cfd-98bd-86230a366fff", 00:11:40.454 "is_configured": true, 00:11:40.454 "data_offset": 2048, 00:11:40.454 "data_size": 63488 00:11:40.454 }, 00:11:40.454 { 00:11:40.454 "name": "BaseBdev2", 00:11:40.454 "uuid": "19ae6bda-2022-4130-8061-73607ea19a99", 00:11:40.454 "is_configured": true, 00:11:40.454 "data_offset": 2048, 00:11:40.454 "data_size": 63488 00:11:40.454 }, 00:11:40.454 { 00:11:40.454 "name": "BaseBdev3", 00:11:40.454 "uuid": "59fe036a-2d3f-4cba-b35d-55d0f983ffed", 00:11:40.454 "is_configured": true, 00:11:40.454 "data_offset": 2048, 00:11:40.454 "data_size": 63488 00:11:40.454 } 00:11:40.454 ] 00:11:40.454 }' 00:11:40.454 19:52:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:40.454 19:52:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.713 19:52:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:40.713 19:52:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:40.713 19:52:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:40.713 19:52:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:40.713 19:52:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:40.713 19:52:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:40.713 19:52:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:40.713 19:52:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.713 19:52:31 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:11:40.713 19:52:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:40.713 [2024-11-26 19:52:31.479823] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:40.713 19:52:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.713 19:52:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:40.713 "name": "Existed_Raid", 00:11:40.713 "aliases": [ 00:11:40.713 "0c51df83-6c48-4b80-ad85-167c1875fa62" 00:11:40.713 ], 00:11:40.713 "product_name": "Raid Volume", 00:11:40.713 "block_size": 512, 00:11:40.713 "num_blocks": 126976, 00:11:40.713 "uuid": "0c51df83-6c48-4b80-ad85-167c1875fa62", 00:11:40.713 "assigned_rate_limits": { 00:11:40.713 "rw_ios_per_sec": 0, 00:11:40.713 "rw_mbytes_per_sec": 0, 00:11:40.713 "r_mbytes_per_sec": 0, 00:11:40.713 "w_mbytes_per_sec": 0 00:11:40.713 }, 00:11:40.713 "claimed": false, 00:11:40.713 "zoned": false, 00:11:40.713 "supported_io_types": { 00:11:40.713 "read": true, 00:11:40.713 "write": true, 00:11:40.713 "unmap": false, 00:11:40.713 "flush": false, 00:11:40.713 "reset": true, 00:11:40.713 "nvme_admin": false, 00:11:40.713 "nvme_io": false, 00:11:40.713 "nvme_io_md": false, 00:11:40.713 "write_zeroes": true, 00:11:40.713 "zcopy": false, 00:11:40.713 "get_zone_info": false, 00:11:40.713 "zone_management": false, 00:11:40.713 "zone_append": false, 00:11:40.713 "compare": false, 00:11:40.713 "compare_and_write": false, 00:11:40.713 "abort": false, 00:11:40.713 "seek_hole": false, 00:11:40.713 "seek_data": false, 00:11:40.713 "copy": false, 00:11:40.713 "nvme_iov_md": false 00:11:40.713 }, 00:11:40.713 "driver_specific": { 00:11:40.713 "raid": { 00:11:40.713 "uuid": "0c51df83-6c48-4b80-ad85-167c1875fa62", 00:11:40.713 "strip_size_kb": 64, 00:11:40.713 "state": "online", 00:11:40.713 "raid_level": "raid5f", 00:11:40.713 "superblock": true, 
00:11:40.713 "num_base_bdevs": 3, 00:11:40.713 "num_base_bdevs_discovered": 3, 00:11:40.713 "num_base_bdevs_operational": 3, 00:11:40.713 "base_bdevs_list": [ 00:11:40.713 { 00:11:40.713 "name": "NewBaseBdev", 00:11:40.713 "uuid": "abd22dd0-79aa-4cfd-98bd-86230a366fff", 00:11:40.713 "is_configured": true, 00:11:40.713 "data_offset": 2048, 00:11:40.713 "data_size": 63488 00:11:40.713 }, 00:11:40.713 { 00:11:40.713 "name": "BaseBdev2", 00:11:40.713 "uuid": "19ae6bda-2022-4130-8061-73607ea19a99", 00:11:40.713 "is_configured": true, 00:11:40.713 "data_offset": 2048, 00:11:40.713 "data_size": 63488 00:11:40.713 }, 00:11:40.713 { 00:11:40.713 "name": "BaseBdev3", 00:11:40.713 "uuid": "59fe036a-2d3f-4cba-b35d-55d0f983ffed", 00:11:40.713 "is_configured": true, 00:11:40.713 "data_offset": 2048, 00:11:40.713 "data_size": 63488 00:11:40.713 } 00:11:40.713 ] 00:11:40.713 } 00:11:40.713 } 00:11:40.713 }' 00:11:40.713 19:52:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:40.713 19:52:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:40.713 BaseBdev2 00:11:40.713 BaseBdev3' 00:11:40.714 19:52:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:40.714 19:52:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:40.714 19:52:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:40.714 19:52:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:40.714 19:52:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.714 19:52:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:11:40.714 19:52:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:40.714 19:52:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.714 19:52:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:40.714 19:52:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:40.714 19:52:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:40.714 19:52:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:40.714 19:52:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.714 19:52:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.714 19:52:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:40.714 19:52:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.714 19:52:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:40.714 19:52:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:40.714 19:52:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:40.714 19:52:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:40.714 19:52:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.714 19:52:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.714 19:52:31 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:40.714 19:52:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.973 19:52:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:40.973 19:52:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:40.973 19:52:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:40.973 19:52:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.973 19:52:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.973 [2024-11-26 19:52:31.663628] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:40.973 [2024-11-26 19:52:31.663653] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:40.973 [2024-11-26 19:52:31.663724] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:40.973 [2024-11-26 19:52:31.663972] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:40.973 [2024-11-26 19:52:31.663983] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:11:40.973 19:52:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.973 19:52:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 78249 00:11:40.973 19:52:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 78249 ']' 00:11:40.973 19:52:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 78249 00:11:40.973 19:52:31 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:11:40.973 19:52:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:40.973 19:52:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78249 00:11:40.973 killing process with pid 78249 00:11:40.973 19:52:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:40.973 19:52:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:40.973 19:52:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78249' 00:11:40.973 19:52:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 78249 00:11:40.973 [2024-11-26 19:52:31.687187] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:40.973 19:52:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 78249 00:11:40.973 [2024-11-26 19:52:31.851913] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:41.908 ************************************ 00:11:41.908 END TEST raid5f_state_function_test_sb 00:11:41.908 ************************************ 00:11:41.908 19:52:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:11:41.908 00:11:41.908 real 0m7.514s 00:11:41.908 user 0m11.946s 00:11:41.908 sys 0m1.312s 00:11:41.908 19:52:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:41.908 19:52:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.908 19:52:32 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 3 00:11:41.908 19:52:32 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:41.908 
19:52:32 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:41.908 19:52:32 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:41.908 ************************************ 00:11:41.908 START TEST raid5f_superblock_test 00:11:41.908 ************************************ 00:11:41.908 19:52:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid5f 3 00:11:41.908 19:52:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:11:41.908 19:52:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:11:41.908 19:52:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:11:41.908 19:52:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:11:41.908 19:52:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:11:41.908 19:52:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:11:41.908 19:52:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:11:41.908 19:52:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:11:41.908 19:52:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:11:41.908 19:52:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:11:41.908 19:52:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:11:41.908 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:11:41.908 19:52:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:11:41.908 19:52:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:11:41.908 19:52:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:11:41.908 19:52:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:11:41.908 19:52:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:11:41.908 19:52:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=78836 00:11:41.908 19:52:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 78836 00:11:41.908 19:52:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 78836 ']' 00:11:41.908 19:52:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:41.908 19:52:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:41.908 19:52:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:41.908 19:52:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:41.908 19:52:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.908 19:52:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:11:41.908 [2024-11-26 19:52:32.601118] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 
00:11:41.908 [2024-11-26 19:52:32.601440] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78836 ] 00:11:41.908 [2024-11-26 19:52:32.789645] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:42.167 [2024-11-26 19:52:32.909005] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:42.167 [2024-11-26 19:52:33.060979] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:42.167 [2024-11-26 19:52:33.061204] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:42.733 19:52:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:42.733 19:52:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:11:42.733 19:52:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:11:42.733 19:52:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:42.733 19:52:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:11:42.733 19:52:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:11:42.733 19:52:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:11:42.733 19:52:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:42.733 19:52:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:42.733 19:52:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:42.733 19:52:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:11:42.733 19:52:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.733 19:52:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.733 malloc1 00:11:42.733 19:52:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.733 19:52:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:42.733 19:52:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.733 19:52:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.733 [2024-11-26 19:52:33.494088] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:42.733 [2024-11-26 19:52:33.494279] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:42.733 [2024-11-26 19:52:33.494310] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:42.733 [2024-11-26 19:52:33.494322] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:42.733 [2024-11-26 19:52:33.496634] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:42.733 [2024-11-26 19:52:33.496667] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:42.733 pt1 00:11:42.733 19:52:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.733 19:52:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:42.733 19:52:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:42.733 19:52:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:11:42.734 19:52:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:11:42.734 19:52:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:11:42.734 19:52:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:42.734 19:52:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:42.734 19:52:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:42.734 19:52:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:11:42.734 19:52:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.734 19:52:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.734 malloc2 00:11:42.734 19:52:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.734 19:52:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:42.734 19:52:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.734 19:52:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.734 [2024-11-26 19:52:33.532667] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:42.734 [2024-11-26 19:52:33.532821] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:42.734 [2024-11-26 19:52:33.532852] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:42.734 [2024-11-26 19:52:33.532861] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:42.734 [2024-11-26 19:52:33.535097] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:42.734 [2024-11-26 19:52:33.535129] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:42.734 pt2 00:11:42.734 19:52:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.734 19:52:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:42.734 19:52:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:42.734 19:52:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:11:42.734 19:52:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:11:42.734 19:52:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:11:42.734 19:52:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:42.734 19:52:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:42.734 19:52:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:42.734 19:52:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:11:42.734 19:52:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.734 19:52:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.734 malloc3 00:11:42.734 19:52:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.734 19:52:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:42.734 19:52:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.734 19:52:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.734 [2024-11-26 19:52:33.583719] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:42.734 [2024-11-26 19:52:33.583767] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:42.734 [2024-11-26 19:52:33.583790] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:11:42.734 [2024-11-26 19:52:33.583799] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:42.734 [2024-11-26 19:52:33.586016] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:42.734 [2024-11-26 19:52:33.586048] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:42.734 pt3 00:11:42.734 19:52:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.734 19:52:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:42.734 19:52:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:42.734 19:52:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:11:42.734 19:52:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.734 19:52:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.734 [2024-11-26 19:52:33.591772] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:42.734 [2024-11-26 19:52:33.593719] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:42.734 [2024-11-26 19:52:33.593783] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:42.734 [2024-11-26 19:52:33.593951] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:11:42.734 [2024-11-26 19:52:33.593970] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 
00:11:42.734 [2024-11-26 19:52:33.594218] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:11:42.734 [2024-11-26 19:52:33.598051] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:11:42.734 [2024-11-26 19:52:33.598069] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:11:42.734 [2024-11-26 19:52:33.598248] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:42.734 19:52:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.734 19:52:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:11:42.734 19:52:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:42.734 19:52:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:42.734 19:52:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:11:42.734 19:52:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:42.734 19:52:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:42.734 19:52:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:42.734 19:52:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:42.734 19:52:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:42.734 19:52:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:42.734 19:52:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:42.734 19:52:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.734 
19:52:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:42.734 19:52:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.734 19:52:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.734 19:52:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:42.734 "name": "raid_bdev1", 00:11:42.734 "uuid": "4caa0ec3-fefe-4096-9365-e3286a5b2c61", 00:11:42.734 "strip_size_kb": 64, 00:11:42.734 "state": "online", 00:11:42.734 "raid_level": "raid5f", 00:11:42.734 "superblock": true, 00:11:42.734 "num_base_bdevs": 3, 00:11:42.734 "num_base_bdevs_discovered": 3, 00:11:42.734 "num_base_bdevs_operational": 3, 00:11:42.734 "base_bdevs_list": [ 00:11:42.734 { 00:11:42.734 "name": "pt1", 00:11:42.734 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:42.734 "is_configured": true, 00:11:42.734 "data_offset": 2048, 00:11:42.734 "data_size": 63488 00:11:42.734 }, 00:11:42.734 { 00:11:42.734 "name": "pt2", 00:11:42.734 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:42.734 "is_configured": true, 00:11:42.734 "data_offset": 2048, 00:11:42.734 "data_size": 63488 00:11:42.734 }, 00:11:42.734 { 00:11:42.734 "name": "pt3", 00:11:42.734 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:42.734 "is_configured": true, 00:11:42.734 "data_offset": 2048, 00:11:42.734 "data_size": 63488 00:11:42.734 } 00:11:42.734 ] 00:11:42.734 }' 00:11:42.734 19:52:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:42.734 19:52:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.993 19:52:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:11:42.993 19:52:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:42.993 19:52:33 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:42.993 19:52:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:42.993 19:52:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:42.993 19:52:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:42.993 19:52:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:42.993 19:52:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.993 19:52:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.993 19:52:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:42.993 [2024-11-26 19:52:33.918885] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:43.252 19:52:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.252 19:52:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:43.252 "name": "raid_bdev1", 00:11:43.252 "aliases": [ 00:11:43.252 "4caa0ec3-fefe-4096-9365-e3286a5b2c61" 00:11:43.252 ], 00:11:43.252 "product_name": "Raid Volume", 00:11:43.252 "block_size": 512, 00:11:43.252 "num_blocks": 126976, 00:11:43.252 "uuid": "4caa0ec3-fefe-4096-9365-e3286a5b2c61", 00:11:43.252 "assigned_rate_limits": { 00:11:43.252 "rw_ios_per_sec": 0, 00:11:43.252 "rw_mbytes_per_sec": 0, 00:11:43.252 "r_mbytes_per_sec": 0, 00:11:43.252 "w_mbytes_per_sec": 0 00:11:43.252 }, 00:11:43.252 "claimed": false, 00:11:43.252 "zoned": false, 00:11:43.252 "supported_io_types": { 00:11:43.252 "read": true, 00:11:43.252 "write": true, 00:11:43.252 "unmap": false, 00:11:43.252 "flush": false, 00:11:43.252 "reset": true, 00:11:43.252 "nvme_admin": false, 00:11:43.252 "nvme_io": false, 00:11:43.252 "nvme_io_md": false, 
00:11:43.252 "write_zeroes": true, 00:11:43.252 "zcopy": false, 00:11:43.252 "get_zone_info": false, 00:11:43.252 "zone_management": false, 00:11:43.252 "zone_append": false, 00:11:43.252 "compare": false, 00:11:43.252 "compare_and_write": false, 00:11:43.252 "abort": false, 00:11:43.252 "seek_hole": false, 00:11:43.252 "seek_data": false, 00:11:43.252 "copy": false, 00:11:43.252 "nvme_iov_md": false 00:11:43.252 }, 00:11:43.252 "driver_specific": { 00:11:43.252 "raid": { 00:11:43.252 "uuid": "4caa0ec3-fefe-4096-9365-e3286a5b2c61", 00:11:43.252 "strip_size_kb": 64, 00:11:43.252 "state": "online", 00:11:43.252 "raid_level": "raid5f", 00:11:43.252 "superblock": true, 00:11:43.252 "num_base_bdevs": 3, 00:11:43.252 "num_base_bdevs_discovered": 3, 00:11:43.252 "num_base_bdevs_operational": 3, 00:11:43.252 "base_bdevs_list": [ 00:11:43.252 { 00:11:43.252 "name": "pt1", 00:11:43.252 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:43.252 "is_configured": true, 00:11:43.252 "data_offset": 2048, 00:11:43.252 "data_size": 63488 00:11:43.252 }, 00:11:43.252 { 00:11:43.252 "name": "pt2", 00:11:43.252 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:43.252 "is_configured": true, 00:11:43.252 "data_offset": 2048, 00:11:43.252 "data_size": 63488 00:11:43.252 }, 00:11:43.252 { 00:11:43.252 "name": "pt3", 00:11:43.252 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:43.252 "is_configured": true, 00:11:43.252 "data_offset": 2048, 00:11:43.252 "data_size": 63488 00:11:43.252 } 00:11:43.252 ] 00:11:43.252 } 00:11:43.252 } 00:11:43.252 }' 00:11:43.252 19:52:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:43.252 19:52:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:43.252 pt2 00:11:43.252 pt3' 00:11:43.252 19:52:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:11:43.252 19:52:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:43.252 19:52:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:43.252 19:52:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:43.252 19:52:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.252 19:52:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.252 19:52:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:43.252 19:52:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.252 19:52:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:43.252 19:52:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:43.252 19:52:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:43.252 19:52:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:43.252 19:52:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.252 19:52:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.252 19:52:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:43.252 19:52:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.252 19:52:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:43.252 19:52:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:43.252 
19:52:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:43.252 19:52:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:43.252 19:52:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:43.252 19:52:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.252 19:52:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.252 19:52:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.252 19:52:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:43.252 19:52:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:43.253 19:52:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:11:43.253 19:52:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:43.253 19:52:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.253 19:52:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.253 [2024-11-26 19:52:34.115001] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:43.253 19:52:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.253 19:52:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=4caa0ec3-fefe-4096-9365-e3286a5b2c61 00:11:43.253 19:52:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 4caa0ec3-fefe-4096-9365-e3286a5b2c61 ']' 00:11:43.253 19:52:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:43.253 19:52:34 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.253 19:52:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.253 [2024-11-26 19:52:34.154668] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:43.253 [2024-11-26 19:52:34.154697] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:43.253 [2024-11-26 19:52:34.154774] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:43.253 [2024-11-26 19:52:34.154857] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:43.253 [2024-11-26 19:52:34.154867] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:11:43.253 19:52:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.253 19:52:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:43.253 19:52:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.253 19:52:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.253 19:52:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:11:43.253 19:52:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.510 19:52:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:11:43.511 19:52:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:11:43.511 19:52:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:43.511 19:52:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:11:43.511 19:52:34 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.511 19:52:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.511 19:52:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.511 19:52:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:43.511 19:52:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:11:43.511 19:52:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.511 19:52:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.511 19:52:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.511 19:52:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:43.511 19:52:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:11:43.511 19:52:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.511 19:52:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.511 19:52:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.511 19:52:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:11:43.511 19:52:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:11:43.511 19:52:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.511 19:52:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.511 19:52:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.511 19:52:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # 
'[' false == true ']' 00:11:43.511 19:52:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:11:43.511 19:52:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:11:43.511 19:52:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:11:43.511 19:52:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:11:43.511 19:52:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:43.511 19:52:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:11:43.511 19:52:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:43.511 19:52:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:11:43.511 19:52:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.511 19:52:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.511 [2024-11-26 19:52:34.262767] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:11:43.511 [2024-11-26 19:52:34.264835] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:11:43.511 [2024-11-26 19:52:34.265020] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:11:43.511 [2024-11-26 19:52:34.265081] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:11:43.511 [2024-11-26 19:52:34.265135] 
bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2
00:11:43.511 [2024-11-26 19:52:34.265156] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3
00:11:43.511 [2024-11-26 19:52:34.265174] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:11:43.511 [2024-11-26 19:52:34.265184] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring
00:11:43.511 request:
00:11:43.511 {
00:11:43.511 "name": "raid_bdev1",
00:11:43.511 "raid_level": "raid5f",
00:11:43.511 "base_bdevs": [
00:11:43.511 "malloc1",
00:11:43.511 "malloc2",
00:11:43.511 "malloc3"
00:11:43.511 ],
00:11:43.511 "strip_size_kb": 64,
00:11:43.511 "superblock": false,
00:11:43.511 "method": "bdev_raid_create",
00:11:43.511 "req_id": 1
00:11:43.511 }
00:11:43.511 Got JSON-RPC error response
00:11:43.511 response:
00:11:43.511 {
00:11:43.511 "code": -17,
00:11:43.511 "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:11:43.511 }
00:11:43.511 19:52:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:11:43.511 19:52:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # es=1
00:11:43.511 19:52:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:11:43.511 19:52:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:11:43.511 19:52:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:11:43.511 19:52:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]'
00:11:43.511 19:52:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:43.511 19:52:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:43.511
19:52:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.511 19:52:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.511 19:52:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:11:43.511 19:52:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:11:43.511 19:52:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:43.511 19:52:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.511 19:52:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.511 [2024-11-26 19:52:34.306700] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:43.511 [2024-11-26 19:52:34.306750] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:43.511 [2024-11-26 19:52:34.306771] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:11:43.511 [2024-11-26 19:52:34.306779] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:43.511 [2024-11-26 19:52:34.309156] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:43.511 [2024-11-26 19:52:34.309190] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:43.511 [2024-11-26 19:52:34.309272] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:43.511 [2024-11-26 19:52:34.309323] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:43.511 pt1 00:11:43.511 19:52:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.511 19:52:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 
3
00:11:43.511 19:52:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:11:43.511 19:52:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:11:43.511 19:52:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:11:43.511 19:52:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:11:43.511 19:52:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:11:43.511 19:52:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:43.511 19:52:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:43.511 19:52:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:43.511 19:52:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:43.511 19:52:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:11:43.511 19:52:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:43.511 19:52:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:43.511 19:52:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:43.511 19:52:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:43.511 19:52:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:43.511 "name": "raid_bdev1",
00:11:43.511 "uuid": "4caa0ec3-fefe-4096-9365-e3286a5b2c61",
00:11:43.511 "strip_size_kb": 64,
00:11:43.511 "state": "configuring",
00:11:43.511 "raid_level": "raid5f",
00:11:43.511 "superblock": true,
00:11:43.511 "num_base_bdevs": 3,
00:11:43.511 "num_base_bdevs_discovered": 1,
00:11:43.511 "num_base_bdevs_operational": 3,
00:11:43.511 "base_bdevs_list": [
00:11:43.511 {
00:11:43.511 "name": "pt1",
00:11:43.511 "uuid": "00000000-0000-0000-0000-000000000001",
00:11:43.511 "is_configured": true,
00:11:43.511 "data_offset": 2048,
00:11:43.511 "data_size": 63488
00:11:43.511 },
00:11:43.511 {
00:11:43.511 "name": null,
00:11:43.511 "uuid": "00000000-0000-0000-0000-000000000002",
00:11:43.511 "is_configured": false,
00:11:43.511 "data_offset": 2048,
00:11:43.511 "data_size": 63488
00:11:43.511 },
00:11:43.511 {
00:11:43.511 "name": null,
00:11:43.511 "uuid": "00000000-0000-0000-0000-000000000003",
00:11:43.511 "is_configured": false,
00:11:43.511 "data_offset": 2048,
00:11:43.511 "data_size": 63488
00:11:43.511 }
00:11:43.511 ]
00:11:43.511 }'
00:11:43.511 19:52:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:43.511 19:52:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:43.768 19:52:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']'
00:11:43.768 19:52:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:11:43.768 19:52:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:43.768 19:52:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:43.768 [2024-11-26 19:52:34.610812] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:11:43.768 [2024-11-26 19:52:34.610881] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:11:43.768 [2024-11-26 19:52:34.610906] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80
00:11:43.768 [2024-11-26 19:52:34.610915] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:11:43.768 [2024-11-26 19:52:34.611405] vbdev_passthru.c:
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:43.768 [2024-11-26 19:52:34.611468] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:43.768 [2024-11-26 19:52:34.611563] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:43.768 [2024-11-26 19:52:34.611591] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:43.768 pt2 00:11:43.768 19:52:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.768 19:52:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:11:43.768 19:52:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.768 19:52:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.768 [2024-11-26 19:52:34.618797] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:11:43.768 19:52:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.768 19:52:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:11:43.768 19:52:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:43.768 19:52:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:43.768 19:52:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:11:43.768 19:52:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:43.768 19:52:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:43.768 19:52:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:43.768 19:52:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs
00:11:43.768 19:52:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:43.768 19:52:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:43.768 19:52:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:43.768 19:52:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:11:43.768 19:52:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:43.768 19:52:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:43.768 19:52:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:43.768 19:52:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:43.768 "name": "raid_bdev1",
00:11:43.768 "uuid": "4caa0ec3-fefe-4096-9365-e3286a5b2c61",
00:11:43.768 "strip_size_kb": 64,
00:11:43.768 "state": "configuring",
00:11:43.768 "raid_level": "raid5f",
00:11:43.768 "superblock": true,
00:11:43.768 "num_base_bdevs": 3,
00:11:43.768 "num_base_bdevs_discovered": 1,
00:11:43.768 "num_base_bdevs_operational": 3,
00:11:43.768 "base_bdevs_list": [
00:11:43.768 {
00:11:43.768 "name": "pt1",
00:11:43.768 "uuid": "00000000-0000-0000-0000-000000000001",
00:11:43.768 "is_configured": true,
00:11:43.768 "data_offset": 2048,
00:11:43.768 "data_size": 63488
00:11:43.768 },
00:11:43.768 {
00:11:43.768 "name": null,
00:11:43.768 "uuid": "00000000-0000-0000-0000-000000000002",
00:11:43.768 "is_configured": false,
00:11:43.768 "data_offset": 0,
00:11:43.768 "data_size": 63488
00:11:43.768 },
00:11:43.768 {
00:11:43.768 "name": null,
00:11:43.768 "uuid": "00000000-0000-0000-0000-000000000003",
00:11:43.768 "is_configured": false,
00:11:43.768 "data_offset": 2048,
00:11:43.768 "data_size": 63488
00:11:43.768 }
00:11:43.768 ]
00:11:43.768 }'
00:11:43.768 19:52:34
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:43.768 19:52:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.074 19:52:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:11:44.074 19:52:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:44.074 19:52:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:44.074 19:52:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.074 19:52:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.074 [2024-11-26 19:52:34.966896] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:44.074 [2024-11-26 19:52:34.966982] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:44.074 [2024-11-26 19:52:34.967002] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:11:44.074 [2024-11-26 19:52:34.967013] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:44.074 [2024-11-26 19:52:34.967515] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:44.074 [2024-11-26 19:52:34.967546] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:44.074 [2024-11-26 19:52:34.967633] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:44.074 [2024-11-26 19:52:34.967658] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:44.074 pt2 00:11:44.074 19:52:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.074 19:52:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:44.074 19:52:34 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:44.074 19:52:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:44.074 19:52:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.074 19:52:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.074 [2024-11-26 19:52:34.974876] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:44.074 [2024-11-26 19:52:34.974920] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:44.074 [2024-11-26 19:52:34.974941] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:44.074 [2024-11-26 19:52:34.974952] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:44.074 [2024-11-26 19:52:34.975330] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:44.074 [2024-11-26 19:52:34.975371] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:44.074 [2024-11-26 19:52:34.975431] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:11:44.074 [2024-11-26 19:52:34.975450] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:44.074 [2024-11-26 19:52:34.975576] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:44.074 [2024-11-26 19:52:34.975587] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:11:44.074 [2024-11-26 19:52:34.975827] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:44.074 [2024-11-26 19:52:34.979455] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:44.074 [2024-11-26 19:52:34.979474] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:11:44.074 [2024-11-26 19:52:34.979643] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:44.074 pt3 00:11:44.074 19:52:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.074 19:52:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:44.074 19:52:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:44.074 19:52:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:11:44.074 19:52:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:44.074 19:52:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:44.074 19:52:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:11:44.074 19:52:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:44.074 19:52:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:44.074 19:52:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:44.074 19:52:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:44.074 19:52:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:44.074 19:52:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:44.074 19:52:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:44.074 19:52:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.074 19:52:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq 
-r '.[] | select(.name == "raid_bdev1")' 00:11:44.074 19:52:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.331 19:52:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.331 19:52:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:44.331 "name": "raid_bdev1", 00:11:44.331 "uuid": "4caa0ec3-fefe-4096-9365-e3286a5b2c61", 00:11:44.331 "strip_size_kb": 64, 00:11:44.331 "state": "online", 00:11:44.331 "raid_level": "raid5f", 00:11:44.331 "superblock": true, 00:11:44.331 "num_base_bdevs": 3, 00:11:44.331 "num_base_bdevs_discovered": 3, 00:11:44.331 "num_base_bdevs_operational": 3, 00:11:44.331 "base_bdevs_list": [ 00:11:44.331 { 00:11:44.331 "name": "pt1", 00:11:44.331 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:44.331 "is_configured": true, 00:11:44.331 "data_offset": 2048, 00:11:44.331 "data_size": 63488 00:11:44.331 }, 00:11:44.331 { 00:11:44.331 "name": "pt2", 00:11:44.331 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:44.331 "is_configured": true, 00:11:44.331 "data_offset": 2048, 00:11:44.331 "data_size": 63488 00:11:44.331 }, 00:11:44.331 { 00:11:44.331 "name": "pt3", 00:11:44.331 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:44.331 "is_configured": true, 00:11:44.331 "data_offset": 2048, 00:11:44.331 "data_size": 63488 00:11:44.331 } 00:11:44.331 ] 00:11:44.331 }' 00:11:44.331 19:52:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:44.331 19:52:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.588 19:52:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:11:44.588 19:52:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:44.588 19:52:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:44.588 
19:52:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:11:44.589 19:52:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name
00:11:44.589 19:52:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:11:44.589 19:52:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:11:44.589 19:52:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:11:44.589 19:52:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:44.589 19:52:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:44.589 [2024-11-26 19:52:35.296272] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:11:44.589 19:52:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:44.589 19:52:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:11:44.589 "name": "raid_bdev1",
00:11:44.589 "aliases": [
00:11:44.589 "4caa0ec3-fefe-4096-9365-e3286a5b2c61"
00:11:44.589 ],
00:11:44.589 "product_name": "Raid Volume",
00:11:44.589 "block_size": 512,
00:11:44.589 "num_blocks": 126976,
00:11:44.589 "uuid": "4caa0ec3-fefe-4096-9365-e3286a5b2c61",
00:11:44.589 "assigned_rate_limits": {
00:11:44.589 "rw_ios_per_sec": 0,
00:11:44.589 "rw_mbytes_per_sec": 0,
00:11:44.589 "r_mbytes_per_sec": 0,
00:11:44.589 "w_mbytes_per_sec": 0
00:11:44.589 },
00:11:44.589 "claimed": false,
00:11:44.589 "zoned": false,
00:11:44.589 "supported_io_types": {
00:11:44.589 "read": true,
00:11:44.589 "write": true,
00:11:44.589 "unmap": false,
00:11:44.589 "flush": false,
00:11:44.589 "reset": true,
00:11:44.589 "nvme_admin": false,
00:11:44.589 "nvme_io": false,
00:11:44.589 "nvme_io_md": false,
00:11:44.589 "write_zeroes": true,
00:11:44.589 "zcopy": false,
00:11:44.589 "get_zone_info": false,
00:11:44.589 "zone_management": false,
00:11:44.589 "zone_append": false,
00:11:44.589 "compare": false,
00:11:44.589 "compare_and_write": false,
00:11:44.589 "abort": false,
00:11:44.589 "seek_hole": false,
00:11:44.589 "seek_data": false,
00:11:44.589 "copy": false,
00:11:44.589 "nvme_iov_md": false
00:11:44.589 },
00:11:44.589 "driver_specific": {
00:11:44.589 "raid": {
00:11:44.589 "uuid": "4caa0ec3-fefe-4096-9365-e3286a5b2c61",
00:11:44.589 "strip_size_kb": 64,
00:11:44.589 "state": "online",
00:11:44.589 "raid_level": "raid5f",
00:11:44.589 "superblock": true,
00:11:44.589 "num_base_bdevs": 3,
00:11:44.589 "num_base_bdevs_discovered": 3,
00:11:44.589 "num_base_bdevs_operational": 3,
00:11:44.589 "base_bdevs_list": [
00:11:44.589 {
00:11:44.589 "name": "pt1",
00:11:44.589 "uuid": "00000000-0000-0000-0000-000000000001",
00:11:44.589 "is_configured": true,
00:11:44.589 "data_offset": 2048,
00:11:44.589 "data_size": 63488
00:11:44.589 },
00:11:44.589 {
00:11:44.589 "name": "pt2",
00:11:44.589 "uuid": "00000000-0000-0000-0000-000000000002",
00:11:44.589 "is_configured": true,
00:11:44.589 "data_offset": 2048,
00:11:44.589 "data_size": 63488
00:11:44.589 },
00:11:44.589 {
00:11:44.589 "name": "pt3",
00:11:44.589 "uuid": "00000000-0000-0000-0000-000000000003",
00:11:44.589 "is_configured": true,
00:11:44.589 "data_offset": 2048,
00:11:44.589 "data_size": 63488
00:11:44.589 }
00:11:44.589 ]
00:11:44.589 }
00:11:44.589 }
00:11:44.589 }'
00:11:44.589 19:52:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:11:44.589 19:52:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:11:44.589 pt2
00:11:44.589 pt3'
00:11:44.589 19:52:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:11:44.589 19:52:35 bdev_raid.raid5f_superblock_test --
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:44.589 19:52:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:44.589 19:52:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:44.589 19:52:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.589 19:52:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.589 19:52:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:44.589 19:52:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.589 19:52:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:44.589 19:52:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:44.589 19:52:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:44.589 19:52:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:44.589 19:52:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:44.589 19:52:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.589 19:52:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.589 19:52:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.589 19:52:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:44.589 19:52:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:44.589 19:52:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in 
$base_bdev_names 00:11:44.589 19:52:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:44.589 19:52:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.589 19:52:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.589 19:52:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:44.589 19:52:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.589 19:52:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:44.589 19:52:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:44.589 19:52:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:44.589 19:52:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.589 19:52:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.589 19:52:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:11:44.589 [2024-11-26 19:52:35.476263] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:44.589 19:52:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.589 19:52:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 4caa0ec3-fefe-4096-9365-e3286a5b2c61 '!=' 4caa0ec3-fefe-4096-9365-e3286a5b2c61 ']' 00:11:44.589 19:52:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:11:44.589 19:52:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:44.589 19:52:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:44.589 19:52:35 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:11:44.589 19:52:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.589 19:52:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.589 [2024-11-26 19:52:35.508109] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:11:44.589 19:52:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.589 19:52:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:11:44.589 19:52:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:44.589 19:52:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:44.589 19:52:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:11:44.589 19:52:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:44.589 19:52:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:44.589 19:52:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:44.589 19:52:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:44.589 19:52:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:44.589 19:52:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:44.589 19:52:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:44.589 19:52:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:44.590 19:52:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:11:44.590 19:52:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.847 19:52:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.847 19:52:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:44.847 "name": "raid_bdev1", 00:11:44.847 "uuid": "4caa0ec3-fefe-4096-9365-e3286a5b2c61", 00:11:44.847 "strip_size_kb": 64, 00:11:44.847 "state": "online", 00:11:44.847 "raid_level": "raid5f", 00:11:44.847 "superblock": true, 00:11:44.847 "num_base_bdevs": 3, 00:11:44.847 "num_base_bdevs_discovered": 2, 00:11:44.847 "num_base_bdevs_operational": 2, 00:11:44.847 "base_bdevs_list": [ 00:11:44.847 { 00:11:44.847 "name": null, 00:11:44.847 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:44.847 "is_configured": false, 00:11:44.847 "data_offset": 0, 00:11:44.847 "data_size": 63488 00:11:44.847 }, 00:11:44.847 { 00:11:44.847 "name": "pt2", 00:11:44.847 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:44.847 "is_configured": true, 00:11:44.847 "data_offset": 2048, 00:11:44.847 "data_size": 63488 00:11:44.847 }, 00:11:44.847 { 00:11:44.847 "name": "pt3", 00:11:44.847 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:44.847 "is_configured": true, 00:11:44.847 "data_offset": 2048, 00:11:44.847 "data_size": 63488 00:11:44.847 } 00:11:44.847 ] 00:11:44.847 }' 00:11:44.847 19:52:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:44.847 19:52:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.104 19:52:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:45.105 19:52:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.105 19:52:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.105 [2024-11-26 19:52:35.828164] bdev_raid.c:2411:raid_bdev_delete: 
*DEBUG*: delete raid bdev: raid_bdev1 00:11:45.105 [2024-11-26 19:52:35.828190] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:45.105 [2024-11-26 19:52:35.828262] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:45.105 [2024-11-26 19:52:35.828316] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:45.105 [2024-11-26 19:52:35.828328] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:11:45.105 19:52:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.105 19:52:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:11:45.105 19:52:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:45.105 19:52:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.105 19:52:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.105 19:52:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.105 19:52:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:11:45.105 19:52:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:11:45.105 19:52:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:11:45.105 19:52:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:11:45.105 19:52:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:11:45.105 19:52:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.105 19:52:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.105 19:52:35 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.105 19:52:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:11:45.105 19:52:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:11:45.105 19:52:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:11:45.105 19:52:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.105 19:52:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.105 19:52:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.105 19:52:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:11:45.105 19:52:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:11:45.105 19:52:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:11:45.105 19:52:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:11:45.105 19:52:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:45.105 19:52:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.105 19:52:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.105 [2024-11-26 19:52:35.888146] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:45.105 [2024-11-26 19:52:35.888194] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:45.105 [2024-11-26 19:52:35.888209] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:11:45.105 [2024-11-26 19:52:35.888218] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:11:45.105 [2024-11-26 19:52:35.890260] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:45.105 [2024-11-26 19:52:35.890383] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:45.105 [2024-11-26 19:52:35.890463] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:45.105 [2024-11-26 19:52:35.890506] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:45.105 pt2 00:11:45.105 19:52:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.105 19:52:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:11:45.105 19:52:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:45.105 19:52:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:45.105 19:52:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:11:45.105 19:52:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:45.105 19:52:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:45.105 19:52:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:45.105 19:52:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:45.105 19:52:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:45.105 19:52:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:45.105 19:52:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:45.105 19:52:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.105 19:52:35 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.105 19:52:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:45.105 19:52:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.105 19:52:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:45.105 "name": "raid_bdev1", 00:11:45.105 "uuid": "4caa0ec3-fefe-4096-9365-e3286a5b2c61", 00:11:45.105 "strip_size_kb": 64, 00:11:45.105 "state": "configuring", 00:11:45.105 "raid_level": "raid5f", 00:11:45.105 "superblock": true, 00:11:45.105 "num_base_bdevs": 3, 00:11:45.105 "num_base_bdevs_discovered": 1, 00:11:45.105 "num_base_bdevs_operational": 2, 00:11:45.105 "base_bdevs_list": [ 00:11:45.105 { 00:11:45.105 "name": null, 00:11:45.105 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:45.105 "is_configured": false, 00:11:45.105 "data_offset": 2048, 00:11:45.105 "data_size": 63488 00:11:45.105 }, 00:11:45.105 { 00:11:45.105 "name": "pt2", 00:11:45.105 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:45.105 "is_configured": true, 00:11:45.105 "data_offset": 2048, 00:11:45.105 "data_size": 63488 00:11:45.105 }, 00:11:45.105 { 00:11:45.105 "name": null, 00:11:45.105 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:45.105 "is_configured": false, 00:11:45.105 "data_offset": 2048, 00:11:45.105 "data_size": 63488 00:11:45.105 } 00:11:45.105 ] 00:11:45.105 }' 00:11:45.105 19:52:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:45.105 19:52:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.362 19:52:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:11:45.362 19:52:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:11:45.362 19:52:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # 
i=2 00:11:45.362 19:52:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:45.362 19:52:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.362 19:52:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.362 [2024-11-26 19:52:36.204227] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:45.362 [2024-11-26 19:52:36.204289] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:45.362 [2024-11-26 19:52:36.204306] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:11:45.362 [2024-11-26 19:52:36.204316] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:45.362 [2024-11-26 19:52:36.204747] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:45.362 [2024-11-26 19:52:36.204770] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:45.362 [2024-11-26 19:52:36.204838] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:11:45.362 [2024-11-26 19:52:36.204859] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:45.362 [2024-11-26 19:52:36.204954] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:45.362 [2024-11-26 19:52:36.204967] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:11:45.362 [2024-11-26 19:52:36.205181] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:11:45.362 [2024-11-26 19:52:36.208254] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:45.362 [2024-11-26 19:52:36.208331] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, 
raid_bdev 0x617000008200 00:11:45.362 [2024-11-26 19:52:36.208675] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:45.362 pt3 00:11:45.362 19:52:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.362 19:52:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:11:45.362 19:52:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:45.362 19:52:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:45.362 19:52:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:11:45.362 19:52:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:45.362 19:52:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:45.362 19:52:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:45.362 19:52:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:45.362 19:52:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:45.362 19:52:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:45.362 19:52:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:45.362 19:52:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:45.362 19:52:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.362 19:52:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.362 19:52:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.362 19:52:36 bdev_raid.raid5f_superblock_test 
-- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:45.362 "name": "raid_bdev1", 00:11:45.362 "uuid": "4caa0ec3-fefe-4096-9365-e3286a5b2c61", 00:11:45.362 "strip_size_kb": 64, 00:11:45.362 "state": "online", 00:11:45.362 "raid_level": "raid5f", 00:11:45.362 "superblock": true, 00:11:45.362 "num_base_bdevs": 3, 00:11:45.362 "num_base_bdevs_discovered": 2, 00:11:45.362 "num_base_bdevs_operational": 2, 00:11:45.362 "base_bdevs_list": [ 00:11:45.362 { 00:11:45.362 "name": null, 00:11:45.362 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:45.362 "is_configured": false, 00:11:45.362 "data_offset": 2048, 00:11:45.362 "data_size": 63488 00:11:45.362 }, 00:11:45.362 { 00:11:45.362 "name": "pt2", 00:11:45.362 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:45.362 "is_configured": true, 00:11:45.362 "data_offset": 2048, 00:11:45.362 "data_size": 63488 00:11:45.362 }, 00:11:45.362 { 00:11:45.362 "name": "pt3", 00:11:45.362 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:45.362 "is_configured": true, 00:11:45.362 "data_offset": 2048, 00:11:45.362 "data_size": 63488 00:11:45.362 } 00:11:45.362 ] 00:11:45.362 }' 00:11:45.362 19:52:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:45.362 19:52:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.619 19:52:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:45.619 19:52:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.619 19:52:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.619 [2024-11-26 19:52:36.532598] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:45.619 [2024-11-26 19:52:36.532624] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:45.619 [2024-11-26 19:52:36.532691] bdev_raid.c: 492:_raid_bdev_destruct: 
*DEBUG*: raid_bdev_destruct 00:11:45.619 [2024-11-26 19:52:36.532752] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:45.619 [2024-11-26 19:52:36.532760] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:11:45.619 19:52:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.619 19:52:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:11:45.619 19:52:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:45.619 19:52:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.619 19:52:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.619 19:52:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.876 19:52:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:11:45.876 19:52:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:11:45.876 19:52:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:11:45.876 19:52:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:11:45.876 19:52:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:11:45.876 19:52:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.876 19:52:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.876 19:52:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.876 19:52:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:45.876 19:52:36 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.876 19:52:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.876 [2024-11-26 19:52:36.580611] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:45.876 [2024-11-26 19:52:36.580659] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:45.876 [2024-11-26 19:52:36.580676] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:11:45.876 [2024-11-26 19:52:36.580684] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:45.876 [2024-11-26 19:52:36.582691] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:45.876 [2024-11-26 19:52:36.582720] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:45.876 [2024-11-26 19:52:36.582786] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:45.876 [2024-11-26 19:52:36.582826] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:45.876 [2024-11-26 19:52:36.582944] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:11:45.876 [2024-11-26 19:52:36.582956] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:45.876 [2024-11-26 19:52:36.582970] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:11:45.876 [2024-11-26 19:52:36.583010] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:45.876 pt1 00:11:45.876 19:52:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.876 19:52:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:11:45.876 19:52:36 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:11:45.876 19:52:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:45.876 19:52:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:45.877 19:52:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:11:45.877 19:52:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:45.877 19:52:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:45.877 19:52:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:45.877 19:52:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:45.877 19:52:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:45.877 19:52:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:45.877 19:52:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:45.877 19:52:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.877 19:52:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.877 19:52:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:45.877 19:52:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.877 19:52:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:45.877 "name": "raid_bdev1", 00:11:45.877 "uuid": "4caa0ec3-fefe-4096-9365-e3286a5b2c61", 00:11:45.877 "strip_size_kb": 64, 00:11:45.877 "state": "configuring", 00:11:45.877 "raid_level": "raid5f", 00:11:45.877 
"superblock": true, 00:11:45.877 "num_base_bdevs": 3, 00:11:45.877 "num_base_bdevs_discovered": 1, 00:11:45.877 "num_base_bdevs_operational": 2, 00:11:45.877 "base_bdevs_list": [ 00:11:45.877 { 00:11:45.877 "name": null, 00:11:45.877 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:45.877 "is_configured": false, 00:11:45.877 "data_offset": 2048, 00:11:45.877 "data_size": 63488 00:11:45.877 }, 00:11:45.877 { 00:11:45.877 "name": "pt2", 00:11:45.877 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:45.877 "is_configured": true, 00:11:45.877 "data_offset": 2048, 00:11:45.877 "data_size": 63488 00:11:45.877 }, 00:11:45.877 { 00:11:45.877 "name": null, 00:11:45.877 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:45.877 "is_configured": false, 00:11:45.877 "data_offset": 2048, 00:11:45.877 "data_size": 63488 00:11:45.877 } 00:11:45.877 ] 00:11:45.877 }' 00:11:45.877 19:52:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:45.877 19:52:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.135 19:52:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:11:46.135 19:52:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.135 19:52:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.135 19:52:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:11:46.135 19:52:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.135 19:52:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:11:46.135 19:52:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:46.135 19:52:36 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.135 19:52:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.135 [2024-11-26 19:52:36.932686] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:46.135 [2024-11-26 19:52:36.932823] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:46.135 [2024-11-26 19:52:36.932892] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:11:46.135 [2024-11-26 19:52:36.932937] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:46.135 [2024-11-26 19:52:36.933404] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:46.135 [2024-11-26 19:52:36.933486] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:46.135 [2024-11-26 19:52:36.933609] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:11:46.135 [2024-11-26 19:52:36.933672] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:46.135 [2024-11-26 19:52:36.933789] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:11:46.135 [2024-11-26 19:52:36.933797] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:11:46.135 [2024-11-26 19:52:36.934018] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:11:46.135 [2024-11-26 19:52:36.936961] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:11:46.135 [2024-11-26 19:52:36.936981] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:11:46.135 [2024-11-26 19:52:36.937173] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:46.135 pt3 00:11:46.135 19:52:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:11:46.135 19:52:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:11:46.135 19:52:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:46.135 19:52:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:46.135 19:52:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:11:46.135 19:52:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:46.135 19:52:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:46.135 19:52:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:46.135 19:52:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:46.135 19:52:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:46.135 19:52:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:46.135 19:52:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:46.135 19:52:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:46.135 19:52:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.135 19:52:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.135 19:52:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.135 19:52:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:46.135 "name": "raid_bdev1", 00:11:46.135 "uuid": "4caa0ec3-fefe-4096-9365-e3286a5b2c61", 00:11:46.135 "strip_size_kb": 64, 00:11:46.135 "state": "online", 00:11:46.135 "raid_level": 
"raid5f", 00:11:46.135 "superblock": true, 00:11:46.135 "num_base_bdevs": 3, 00:11:46.135 "num_base_bdevs_discovered": 2, 00:11:46.135 "num_base_bdevs_operational": 2, 00:11:46.135 "base_bdevs_list": [ 00:11:46.135 { 00:11:46.135 "name": null, 00:11:46.135 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:46.135 "is_configured": false, 00:11:46.135 "data_offset": 2048, 00:11:46.135 "data_size": 63488 00:11:46.135 }, 00:11:46.135 { 00:11:46.135 "name": "pt2", 00:11:46.135 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:46.135 "is_configured": true, 00:11:46.135 "data_offset": 2048, 00:11:46.135 "data_size": 63488 00:11:46.135 }, 00:11:46.135 { 00:11:46.135 "name": "pt3", 00:11:46.135 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:46.135 "is_configured": true, 00:11:46.135 "data_offset": 2048, 00:11:46.135 "data_size": 63488 00:11:46.135 } 00:11:46.135 ] 00:11:46.135 }' 00:11:46.135 19:52:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:46.135 19:52:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.392 19:52:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:11:46.392 19:52:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.392 19:52:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.392 19:52:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:11:46.392 19:52:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.392 19:52:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:11:46.392 19:52:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:46.392 19:52:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 
00:11:46.392 19:52:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.392 19:52:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.392 [2024-11-26 19:52:37.277355] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:46.392 19:52:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.393 19:52:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 4caa0ec3-fefe-4096-9365-e3286a5b2c61 '!=' 4caa0ec3-fefe-4096-9365-e3286a5b2c61 ']' 00:11:46.393 19:52:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 78836 00:11:46.393 19:52:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 78836 ']' 00:11:46.393 19:52:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # kill -0 78836 00:11:46.393 19:52:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # uname 00:11:46.393 19:52:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:46.393 19:52:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78836 00:11:46.650 killing process with pid 78836 00:11:46.650 19:52:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:46.650 19:52:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:46.650 19:52:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78836' 00:11:46.650 19:52:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@973 -- # kill 78836 00:11:46.650 [2024-11-26 19:52:37.329752] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:46.650 19:52:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@978 -- # wait 78836 
00:11:46.650 [2024-11-26 19:52:37.329840] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:46.651 [2024-11-26 19:52:37.329899] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:46.651 [2024-11-26 19:52:37.329909] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:11:46.651 [2024-11-26 19:52:37.488749] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:47.216 19:52:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:11:47.216 00:11:47.216 real 0m5.590s 00:11:47.216 user 0m8.774s 00:11:47.216 sys 0m0.967s 00:11:47.216 ************************************ 00:11:47.216 END TEST raid5f_superblock_test 00:11:47.216 ************************************ 00:11:47.216 19:52:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:47.216 19:52:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.474 19:52:38 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:11:47.474 19:52:38 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 3 false false true 00:11:47.474 19:52:38 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:11:47.474 19:52:38 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:47.474 19:52:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:47.474 ************************************ 00:11:47.474 START TEST raid5f_rebuild_test 00:11:47.474 ************************************ 00:11:47.474 19:52:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 3 false false true 00:11:47.474 19:52:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:11:47.474 19:52:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=3 00:11:47.474 19:52:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:11:47.474 19:52:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:11:47.474 19:52:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:11:47.474 19:52:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:11:47.474 19:52:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:47.474 19:52:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:11:47.474 19:52:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:47.474 19:52:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:47.474 19:52:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:11:47.474 19:52:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:47.474 19:52:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:47.474 19:52:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:11:47.474 19:52:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:47.474 19:52:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:47.474 19:52:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:11:47.474 19:52:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:11:47.474 19:52:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:11:47.474 19:52:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:11:47.474 19:52:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:11:47.474 19:52:38 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:11:47.474 19:52:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:11:47.474 19:52:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:11:47.474 19:52:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:11:47.474 19:52:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:11:47.474 19:52:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:11:47.474 19:52:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:11:47.474 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:47.474 19:52:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=79259 00:11:47.474 19:52:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 79259 00:11:47.474 19:52:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 79259 ']' 00:11:47.474 19:52:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:47.474 19:52:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:47.474 19:52:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:47.474 19:52:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:47.474 19:52:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.474 19:52:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:11:47.474 I/O size of 3145728 is greater than zero copy threshold (65536). 
00:11:47.474 Zero copy mechanism will not be used. 00:11:47.474 [2024-11-26 19:52:38.242191] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 00:11:47.474 [2024-11-26 19:52:38.242324] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79259 ] 00:11:47.474 [2024-11-26 19:52:38.401121] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:47.732 [2024-11-26 19:52:38.503788] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:47.732 [2024-11-26 19:52:38.630822] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:47.732 [2024-11-26 19:52:38.630862] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:48.299 19:52:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:48.299 19:52:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:11:48.299 19:52:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:48.299 19:52:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:48.299 19:52:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.299 19:52:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.299 BaseBdev1_malloc 00:11:48.299 19:52:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.299 19:52:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:11:48.299 19:52:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.299 19:52:39 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.299 [2024-11-26 19:52:39.111603] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:11:48.299 [2024-11-26 19:52:39.111661] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:48.299 [2024-11-26 19:52:39.111680] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:48.299 [2024-11-26 19:52:39.111691] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:48.299 [2024-11-26 19:52:39.113602] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:48.299 [2024-11-26 19:52:39.113730] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:48.299 BaseBdev1 00:11:48.299 19:52:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.299 19:52:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:48.299 19:52:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:48.299 19:52:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.299 19:52:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.299 BaseBdev2_malloc 00:11:48.299 19:52:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.299 19:52:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:11:48.299 19:52:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.299 19:52:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.299 [2024-11-26 19:52:39.146186] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 
00:11:48.299 [2024-11-26 19:52:39.146233] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:48.299 [2024-11-26 19:52:39.146252] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:48.299 [2024-11-26 19:52:39.146262] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:48.299 [2024-11-26 19:52:39.148096] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:48.299 [2024-11-26 19:52:39.148125] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:48.299 BaseBdev2 00:11:48.299 19:52:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.299 19:52:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:48.299 19:52:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:48.299 19:52:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.299 19:52:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.299 BaseBdev3_malloc 00:11:48.299 19:52:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.299 19:52:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:11:48.299 19:52:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.299 19:52:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.299 [2024-11-26 19:52:39.201189] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:11:48.299 [2024-11-26 19:52:39.201331] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:48.299 [2024-11-26 19:52:39.201368] vbdev_passthru.c: 681:vbdev_passthru_register: 
*NOTICE*: io_device created at: 0x0x616000008a80 00:11:48.299 [2024-11-26 19:52:39.201379] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:48.299 [2024-11-26 19:52:39.203226] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:48.299 [2024-11-26 19:52:39.203258] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:48.299 BaseBdev3 00:11:48.299 19:52:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.299 19:52:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:11:48.299 19:52:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.299 19:52:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.556 spare_malloc 00:11:48.556 19:52:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.556 19:52:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:11:48.556 19:52:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.556 19:52:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.556 spare_delay 00:11:48.556 19:52:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.556 19:52:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:11:48.556 19:52:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.556 19:52:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.556 [2024-11-26 19:52:39.243698] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:11:48.556 [2024-11-26 19:52:39.243820] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:48.556 [2024-11-26 19:52:39.243838] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:11:48.556 [2024-11-26 19:52:39.243848] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:48.556 [2024-11-26 19:52:39.245767] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:48.556 [2024-11-26 19:52:39.245799] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:11:48.556 spare 00:11:48.556 19:52:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.556 19:52:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:11:48.556 19:52:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.556 19:52:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.556 [2024-11-26 19:52:39.251761] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:48.557 [2024-11-26 19:52:39.253387] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:48.557 [2024-11-26 19:52:39.253440] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:48.557 [2024-11-26 19:52:39.253506] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:11:48.557 [2024-11-26 19:52:39.253515] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:11:48.557 [2024-11-26 19:52:39.253735] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:48.557 [2024-11-26 19:52:39.256788] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:11:48.557 [2024-11-26 19:52:39.256803] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:11:48.557 [2024-11-26 19:52:39.256945] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:48.557 19:52:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.557 19:52:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:11:48.557 19:52:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:48.557 19:52:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:48.557 19:52:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:11:48.557 19:52:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:48.557 19:52:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:48.557 19:52:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:48.557 19:52:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:48.557 19:52:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:48.557 19:52:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:48.557 19:52:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:48.557 19:52:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:48.557 19:52:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.557 19:52:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.557 19:52:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.557 19:52:39 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:48.557 "name": "raid_bdev1", 00:11:48.557 "uuid": "935b694d-2902-4e80-8b2d-9811735152ac", 00:11:48.557 "strip_size_kb": 64, 00:11:48.557 "state": "online", 00:11:48.557 "raid_level": "raid5f", 00:11:48.557 "superblock": false, 00:11:48.557 "num_base_bdevs": 3, 00:11:48.557 "num_base_bdevs_discovered": 3, 00:11:48.557 "num_base_bdevs_operational": 3, 00:11:48.557 "base_bdevs_list": [ 00:11:48.557 { 00:11:48.557 "name": "BaseBdev1", 00:11:48.557 "uuid": "1c988c08-568e-5106-b52c-27703f5ccabe", 00:11:48.557 "is_configured": true, 00:11:48.557 "data_offset": 0, 00:11:48.557 "data_size": 65536 00:11:48.557 }, 00:11:48.557 { 00:11:48.557 "name": "BaseBdev2", 00:11:48.557 "uuid": "f0f977c7-f312-5052-8b5e-7440b2aec0f0", 00:11:48.557 "is_configured": true, 00:11:48.557 "data_offset": 0, 00:11:48.557 "data_size": 65536 00:11:48.557 }, 00:11:48.557 { 00:11:48.557 "name": "BaseBdev3", 00:11:48.557 "uuid": "0b2647a1-5f1f-5e63-be86-bd4b8a0d4c1e", 00:11:48.557 "is_configured": true, 00:11:48.557 "data_offset": 0, 00:11:48.557 "data_size": 65536 00:11:48.557 } 00:11:48.557 ] 00:11:48.557 }' 00:11:48.557 19:52:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:48.557 19:52:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.814 19:52:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:11:48.814 19:52:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:48.814 19:52:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.814 19:52:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.814 [2024-11-26 19:52:39.569192] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:48.814 19:52:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:11:48.814 19:52:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=131072 00:11:48.814 19:52:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:11:48.814 19:52:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:48.814 19:52:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.814 19:52:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.814 19:52:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.814 19:52:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:11:48.814 19:52:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:11:48.814 19:52:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:11:48.814 19:52:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:11:48.815 19:52:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:11:48.815 19:52:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:11:48.815 19:52:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:11:48.815 19:52:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:48.815 19:52:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:11:48.815 19:52:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:48.815 19:52:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:11:48.815 19:52:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:48.815 19:52:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # 
(( i < 1 )) 00:11:48.815 19:52:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:11:49.073 [2024-11-26 19:52:39.809168] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:11:49.073 /dev/nbd0 00:11:49.073 19:52:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:49.073 19:52:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:49.073 19:52:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:11:49.073 19:52:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:11:49.073 19:52:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:49.073 19:52:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:49.073 19:52:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:11:49.073 19:52:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:11:49.073 19:52:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:49.073 19:52:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:49.073 19:52:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:49.073 1+0 records in 00:11:49.073 1+0 records out 00:11:49.073 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000193462 s, 21.2 MB/s 00:11:49.073 19:52:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:49.073 19:52:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:11:49.073 19:52:39 bdev_raid.raid5f_rebuild_test 
-- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:49.073 19:52:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:49.073 19:52:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:11:49.073 19:52:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:49.073 19:52:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:49.073 19:52:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:11:49.073 19:52:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:11:49.073 19:52:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 128 00:11:49.073 19:52:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=512 oflag=direct 00:11:49.331 512+0 records in 00:11:49.331 512+0 records out 00:11:49.331 67108864 bytes (67 MB, 64 MiB) copied, 0.38445 s, 175 MB/s 00:11:49.331 19:52:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:11:49.331 19:52:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:11:49.331 19:52:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:11:49.331 19:52:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:49.331 19:52:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:11:49.331 19:52:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:49.331 19:52:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:11:49.589 19:52:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:49.589 
[2024-11-26 19:52:40.469605] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:49.589 19:52:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:49.589 19:52:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:49.589 19:52:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:49.589 19:52:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:49.589 19:52:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:49.589 19:52:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:11:49.589 19:52:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:11:49.589 19:52:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:11:49.589 19:52:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.589 19:52:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.589 [2024-11-26 19:52:40.482184] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:49.589 19:52:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.589 19:52:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:11:49.589 19:52:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:49.589 19:52:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:49.589 19:52:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:11:49.589 19:52:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:49.589 19:52:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:11:49.589 19:52:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:49.589 19:52:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:49.589 19:52:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:49.589 19:52:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:49.589 19:52:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:49.589 19:52:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.589 19:52:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:49.589 19:52:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.589 19:52:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.589 19:52:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:49.589 "name": "raid_bdev1", 00:11:49.589 "uuid": "935b694d-2902-4e80-8b2d-9811735152ac", 00:11:49.589 "strip_size_kb": 64, 00:11:49.589 "state": "online", 00:11:49.589 "raid_level": "raid5f", 00:11:49.589 "superblock": false, 00:11:49.589 "num_base_bdevs": 3, 00:11:49.589 "num_base_bdevs_discovered": 2, 00:11:49.589 "num_base_bdevs_operational": 2, 00:11:49.589 "base_bdevs_list": [ 00:11:49.589 { 00:11:49.589 "name": null, 00:11:49.589 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:49.589 "is_configured": false, 00:11:49.589 "data_offset": 0, 00:11:49.589 "data_size": 65536 00:11:49.589 }, 00:11:49.589 { 00:11:49.589 "name": "BaseBdev2", 00:11:49.589 "uuid": "f0f977c7-f312-5052-8b5e-7440b2aec0f0", 00:11:49.589 "is_configured": true, 00:11:49.589 "data_offset": 0, 00:11:49.589 "data_size": 65536 00:11:49.589 }, 00:11:49.589 { 00:11:49.589 "name": "BaseBdev3", 00:11:49.589 "uuid": 
"0b2647a1-5f1f-5e63-be86-bd4b8a0d4c1e", 00:11:49.589 "is_configured": true, 00:11:49.589 "data_offset": 0, 00:11:49.589 "data_size": 65536 00:11:49.589 } 00:11:49.589 ] 00:11:49.589 }' 00:11:49.589 19:52:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:49.589 19:52:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.155 19:52:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:50.155 19:52:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.155 19:52:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.155 [2024-11-26 19:52:40.794286] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:50.155 [2024-11-26 19:52:40.805923] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b680 00:11:50.155 19:52:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.155 19:52:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:11:50.155 [2024-11-26 19:52:40.811631] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:51.089 19:52:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:51.089 19:52:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:51.089 19:52:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:51.089 19:52:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:51.089 19:52:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:51.089 19:52:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:51.089 19:52:41 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:51.089 19:52:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.089 19:52:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.089 19:52:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.089 19:52:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:51.089 "name": "raid_bdev1", 00:11:51.089 "uuid": "935b694d-2902-4e80-8b2d-9811735152ac", 00:11:51.089 "strip_size_kb": 64, 00:11:51.089 "state": "online", 00:11:51.089 "raid_level": "raid5f", 00:11:51.089 "superblock": false, 00:11:51.089 "num_base_bdevs": 3, 00:11:51.089 "num_base_bdevs_discovered": 3, 00:11:51.089 "num_base_bdevs_operational": 3, 00:11:51.089 "process": { 00:11:51.089 "type": "rebuild", 00:11:51.089 "target": "spare", 00:11:51.089 "progress": { 00:11:51.089 "blocks": 18432, 00:11:51.089 "percent": 14 00:11:51.089 } 00:11:51.089 }, 00:11:51.089 "base_bdevs_list": [ 00:11:51.089 { 00:11:51.089 "name": "spare", 00:11:51.090 "uuid": "b50057c8-0895-59bd-a78e-30d7fbdfbb60", 00:11:51.090 "is_configured": true, 00:11:51.090 "data_offset": 0, 00:11:51.090 "data_size": 65536 00:11:51.090 }, 00:11:51.090 { 00:11:51.090 "name": "BaseBdev2", 00:11:51.090 "uuid": "f0f977c7-f312-5052-8b5e-7440b2aec0f0", 00:11:51.090 "is_configured": true, 00:11:51.090 "data_offset": 0, 00:11:51.090 "data_size": 65536 00:11:51.090 }, 00:11:51.090 { 00:11:51.090 "name": "BaseBdev3", 00:11:51.090 "uuid": "0b2647a1-5f1f-5e63-be86-bd4b8a0d4c1e", 00:11:51.090 "is_configured": true, 00:11:51.090 "data_offset": 0, 00:11:51.090 "data_size": 65536 00:11:51.090 } 00:11:51.090 ] 00:11:51.090 }' 00:11:51.090 19:52:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:51.090 19:52:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- 
# [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:51.090 19:52:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:51.090 19:52:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:51.090 19:52:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:11:51.090 19:52:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.090 19:52:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.090 [2024-11-26 19:52:41.912769] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:51.090 [2024-11-26 19:52:41.923681] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:11:51.090 [2024-11-26 19:52:41.923742] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:51.090 [2024-11-26 19:52:41.923762] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:51.090 [2024-11-26 19:52:41.923771] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:11:51.090 19:52:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.090 19:52:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:11:51.090 19:52:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:51.090 19:52:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:51.090 19:52:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:11:51.090 19:52:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:51.090 19:52:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:11:51.090 19:52:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:51.090 19:52:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:51.090 19:52:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:51.090 19:52:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:51.090 19:52:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:51.090 19:52:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.090 19:52:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.090 19:52:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:51.090 19:52:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.090 19:52:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:51.090 "name": "raid_bdev1", 00:11:51.090 "uuid": "935b694d-2902-4e80-8b2d-9811735152ac", 00:11:51.090 "strip_size_kb": 64, 00:11:51.090 "state": "online", 00:11:51.090 "raid_level": "raid5f", 00:11:51.090 "superblock": false, 00:11:51.090 "num_base_bdevs": 3, 00:11:51.090 "num_base_bdevs_discovered": 2, 00:11:51.090 "num_base_bdevs_operational": 2, 00:11:51.090 "base_bdevs_list": [ 00:11:51.090 { 00:11:51.090 "name": null, 00:11:51.090 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:51.090 "is_configured": false, 00:11:51.090 "data_offset": 0, 00:11:51.090 "data_size": 65536 00:11:51.090 }, 00:11:51.090 { 00:11:51.090 "name": "BaseBdev2", 00:11:51.090 "uuid": "f0f977c7-f312-5052-8b5e-7440b2aec0f0", 00:11:51.090 "is_configured": true, 00:11:51.090 "data_offset": 0, 00:11:51.090 "data_size": 65536 00:11:51.090 }, 00:11:51.090 { 00:11:51.090 "name": "BaseBdev3", 00:11:51.090 "uuid": 
"0b2647a1-5f1f-5e63-be86-bd4b8a0d4c1e", 00:11:51.090 "is_configured": true, 00:11:51.090 "data_offset": 0, 00:11:51.090 "data_size": 65536 00:11:51.090 } 00:11:51.090 ] 00:11:51.090 }' 00:11:51.090 19:52:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:51.090 19:52:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.348 19:52:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:51.348 19:52:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:51.348 19:52:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:51.348 19:52:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:51.348 19:52:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:51.348 19:52:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:51.348 19:52:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:51.348 19:52:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.348 19:52:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.348 19:52:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.606 19:52:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:51.606 "name": "raid_bdev1", 00:11:51.606 "uuid": "935b694d-2902-4e80-8b2d-9811735152ac", 00:11:51.606 "strip_size_kb": 64, 00:11:51.606 "state": "online", 00:11:51.606 "raid_level": "raid5f", 00:11:51.606 "superblock": false, 00:11:51.606 "num_base_bdevs": 3, 00:11:51.606 "num_base_bdevs_discovered": 2, 00:11:51.606 "num_base_bdevs_operational": 2, 00:11:51.606 "base_bdevs_list": [ 00:11:51.606 { 00:11:51.606 
"name": null, 00:11:51.606 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:51.606 "is_configured": false, 00:11:51.606 "data_offset": 0, 00:11:51.606 "data_size": 65536 00:11:51.606 }, 00:11:51.606 { 00:11:51.606 "name": "BaseBdev2", 00:11:51.606 "uuid": "f0f977c7-f312-5052-8b5e-7440b2aec0f0", 00:11:51.606 "is_configured": true, 00:11:51.606 "data_offset": 0, 00:11:51.606 "data_size": 65536 00:11:51.606 }, 00:11:51.606 { 00:11:51.606 "name": "BaseBdev3", 00:11:51.606 "uuid": "0b2647a1-5f1f-5e63-be86-bd4b8a0d4c1e", 00:11:51.606 "is_configured": true, 00:11:51.606 "data_offset": 0, 00:11:51.606 "data_size": 65536 00:11:51.606 } 00:11:51.606 ] 00:11:51.606 }' 00:11:51.606 19:52:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:51.606 19:52:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:51.606 19:52:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:51.606 19:52:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:51.606 19:52:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:51.606 19:52:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.606 19:52:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.606 [2024-11-26 19:52:42.356331] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:51.606 [2024-11-26 19:52:42.366970] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:11:51.606 19:52:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.606 19:52:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:11:51.606 [2024-11-26 19:52:42.372505] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild 
on raid bdev raid_bdev1 00:11:52.539 19:52:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:52.539 19:52:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:52.539 19:52:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:52.539 19:52:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:52.539 19:52:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:52.539 19:52:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:52.539 19:52:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:52.539 19:52:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.539 19:52:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.539 19:52:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.539 19:52:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:52.539 "name": "raid_bdev1", 00:11:52.539 "uuid": "935b694d-2902-4e80-8b2d-9811735152ac", 00:11:52.539 "strip_size_kb": 64, 00:11:52.539 "state": "online", 00:11:52.539 "raid_level": "raid5f", 00:11:52.539 "superblock": false, 00:11:52.539 "num_base_bdevs": 3, 00:11:52.539 "num_base_bdevs_discovered": 3, 00:11:52.539 "num_base_bdevs_operational": 3, 00:11:52.539 "process": { 00:11:52.539 "type": "rebuild", 00:11:52.539 "target": "spare", 00:11:52.539 "progress": { 00:11:52.539 "blocks": 18432, 00:11:52.539 "percent": 14 00:11:52.539 } 00:11:52.539 }, 00:11:52.539 "base_bdevs_list": [ 00:11:52.539 { 00:11:52.539 "name": "spare", 00:11:52.539 "uuid": "b50057c8-0895-59bd-a78e-30d7fbdfbb60", 00:11:52.539 "is_configured": true, 00:11:52.539 "data_offset": 0, 
00:11:52.539 "data_size": 65536 00:11:52.539 }, 00:11:52.539 { 00:11:52.539 "name": "BaseBdev2", 00:11:52.539 "uuid": "f0f977c7-f312-5052-8b5e-7440b2aec0f0", 00:11:52.539 "is_configured": true, 00:11:52.539 "data_offset": 0, 00:11:52.539 "data_size": 65536 00:11:52.539 }, 00:11:52.539 { 00:11:52.539 "name": "BaseBdev3", 00:11:52.539 "uuid": "0b2647a1-5f1f-5e63-be86-bd4b8a0d4c1e", 00:11:52.539 "is_configured": true, 00:11:52.539 "data_offset": 0, 00:11:52.539 "data_size": 65536 00:11:52.539 } 00:11:52.539 ] 00:11:52.539 }' 00:11:52.539 19:52:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:52.539 19:52:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:52.539 19:52:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:52.539 19:52:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:52.539 19:52:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:11:52.539 19:52:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:11:52.539 19:52:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:11:52.539 19:52:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=433 00:11:52.539 19:52:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:52.539 19:52:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:52.539 19:52:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:52.539 19:52:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:52.539 19:52:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:52.539 19:52:43 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:52.539 19:52:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:52.539 19:52:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:52.539 19:52:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.797 19:52:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.797 19:52:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.797 19:52:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:52.797 "name": "raid_bdev1", 00:11:52.797 "uuid": "935b694d-2902-4e80-8b2d-9811735152ac", 00:11:52.797 "strip_size_kb": 64, 00:11:52.797 "state": "online", 00:11:52.797 "raid_level": "raid5f", 00:11:52.797 "superblock": false, 00:11:52.797 "num_base_bdevs": 3, 00:11:52.797 "num_base_bdevs_discovered": 3, 00:11:52.797 "num_base_bdevs_operational": 3, 00:11:52.797 "process": { 00:11:52.797 "type": "rebuild", 00:11:52.797 "target": "spare", 00:11:52.797 "progress": { 00:11:52.797 "blocks": 20480, 00:11:52.797 "percent": 15 00:11:52.797 } 00:11:52.797 }, 00:11:52.797 "base_bdevs_list": [ 00:11:52.797 { 00:11:52.797 "name": "spare", 00:11:52.797 "uuid": "b50057c8-0895-59bd-a78e-30d7fbdfbb60", 00:11:52.797 "is_configured": true, 00:11:52.797 "data_offset": 0, 00:11:52.797 "data_size": 65536 00:11:52.797 }, 00:11:52.797 { 00:11:52.797 "name": "BaseBdev2", 00:11:52.797 "uuid": "f0f977c7-f312-5052-8b5e-7440b2aec0f0", 00:11:52.797 "is_configured": true, 00:11:52.797 "data_offset": 0, 00:11:52.797 "data_size": 65536 00:11:52.797 }, 00:11:52.797 { 00:11:52.797 "name": "BaseBdev3", 00:11:52.797 "uuid": "0b2647a1-5f1f-5e63-be86-bd4b8a0d4c1e", 00:11:52.797 "is_configured": true, 00:11:52.797 "data_offset": 0, 00:11:52.797 "data_size": 65536 00:11:52.797 } 
00:11:52.797 ] 00:11:52.797 }' 00:11:52.797 19:52:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:52.797 19:52:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:52.797 19:52:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:52.797 19:52:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:52.797 19:52:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:53.730 19:52:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:53.730 19:52:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:53.730 19:52:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:53.730 19:52:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:53.730 19:52:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:53.730 19:52:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:53.730 19:52:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:53.730 19:52:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.730 19:52:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:53.730 19:52:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.730 19:52:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.730 19:52:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:53.730 "name": "raid_bdev1", 00:11:53.730 "uuid": "935b694d-2902-4e80-8b2d-9811735152ac", 00:11:53.730 
"strip_size_kb": 64, 00:11:53.730 "state": "online", 00:11:53.730 "raid_level": "raid5f", 00:11:53.730 "superblock": false, 00:11:53.730 "num_base_bdevs": 3, 00:11:53.730 "num_base_bdevs_discovered": 3, 00:11:53.730 "num_base_bdevs_operational": 3, 00:11:53.730 "process": { 00:11:53.730 "type": "rebuild", 00:11:53.730 "target": "spare", 00:11:53.730 "progress": { 00:11:53.730 "blocks": 43008, 00:11:53.730 "percent": 32 00:11:53.730 } 00:11:53.730 }, 00:11:53.730 "base_bdevs_list": [ 00:11:53.730 { 00:11:53.730 "name": "spare", 00:11:53.730 "uuid": "b50057c8-0895-59bd-a78e-30d7fbdfbb60", 00:11:53.730 "is_configured": true, 00:11:53.730 "data_offset": 0, 00:11:53.730 "data_size": 65536 00:11:53.730 }, 00:11:53.730 { 00:11:53.730 "name": "BaseBdev2", 00:11:53.730 "uuid": "f0f977c7-f312-5052-8b5e-7440b2aec0f0", 00:11:53.730 "is_configured": true, 00:11:53.730 "data_offset": 0, 00:11:53.730 "data_size": 65536 00:11:53.730 }, 00:11:53.730 { 00:11:53.730 "name": "BaseBdev3", 00:11:53.730 "uuid": "0b2647a1-5f1f-5e63-be86-bd4b8a0d4c1e", 00:11:53.730 "is_configured": true, 00:11:53.730 "data_offset": 0, 00:11:53.730 "data_size": 65536 00:11:53.730 } 00:11:53.730 ] 00:11:53.730 }' 00:11:53.730 19:52:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:53.730 19:52:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:53.730 19:52:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:53.730 19:52:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:53.730 19:52:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:55.122 19:52:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:55.122 19:52:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:55.122 19:52:45 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:55.122 19:52:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:55.122 19:52:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:55.122 19:52:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:55.122 19:52:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:55.122 19:52:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.122 19:52:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:55.122 19:52:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.122 19:52:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.122 19:52:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:55.122 "name": "raid_bdev1", 00:11:55.122 "uuid": "935b694d-2902-4e80-8b2d-9811735152ac", 00:11:55.122 "strip_size_kb": 64, 00:11:55.122 "state": "online", 00:11:55.122 "raid_level": "raid5f", 00:11:55.122 "superblock": false, 00:11:55.122 "num_base_bdevs": 3, 00:11:55.122 "num_base_bdevs_discovered": 3, 00:11:55.122 "num_base_bdevs_operational": 3, 00:11:55.122 "process": { 00:11:55.122 "type": "rebuild", 00:11:55.122 "target": "spare", 00:11:55.122 "progress": { 00:11:55.122 "blocks": 65536, 00:11:55.122 "percent": 50 00:11:55.122 } 00:11:55.122 }, 00:11:55.122 "base_bdevs_list": [ 00:11:55.122 { 00:11:55.122 "name": "spare", 00:11:55.122 "uuid": "b50057c8-0895-59bd-a78e-30d7fbdfbb60", 00:11:55.122 "is_configured": true, 00:11:55.122 "data_offset": 0, 00:11:55.122 "data_size": 65536 00:11:55.122 }, 00:11:55.122 { 00:11:55.122 "name": "BaseBdev2", 00:11:55.122 "uuid": "f0f977c7-f312-5052-8b5e-7440b2aec0f0", 00:11:55.122 
"is_configured": true, 00:11:55.122 "data_offset": 0, 00:11:55.122 "data_size": 65536 00:11:55.123 }, 00:11:55.123 { 00:11:55.123 "name": "BaseBdev3", 00:11:55.123 "uuid": "0b2647a1-5f1f-5e63-be86-bd4b8a0d4c1e", 00:11:55.123 "is_configured": true, 00:11:55.123 "data_offset": 0, 00:11:55.123 "data_size": 65536 00:11:55.123 } 00:11:55.123 ] 00:11:55.123 }' 00:11:55.123 19:52:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:55.123 19:52:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:55.123 19:52:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:55.123 19:52:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:55.123 19:52:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:56.058 19:52:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:56.058 19:52:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:56.058 19:52:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:56.058 19:52:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:56.058 19:52:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:56.058 19:52:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:56.058 19:52:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:56.058 19:52:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:56.058 19:52:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.058 19:52:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # 
set +x 00:11:56.058 19:52:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.058 19:52:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:56.058 "name": "raid_bdev1", 00:11:56.058 "uuid": "935b694d-2902-4e80-8b2d-9811735152ac", 00:11:56.058 "strip_size_kb": 64, 00:11:56.058 "state": "online", 00:11:56.058 "raid_level": "raid5f", 00:11:56.058 "superblock": false, 00:11:56.058 "num_base_bdevs": 3, 00:11:56.058 "num_base_bdevs_discovered": 3, 00:11:56.058 "num_base_bdevs_operational": 3, 00:11:56.058 "process": { 00:11:56.058 "type": "rebuild", 00:11:56.058 "target": "spare", 00:11:56.058 "progress": { 00:11:56.058 "blocks": 88064, 00:11:56.058 "percent": 67 00:11:56.058 } 00:11:56.058 }, 00:11:56.058 "base_bdevs_list": [ 00:11:56.058 { 00:11:56.058 "name": "spare", 00:11:56.058 "uuid": "b50057c8-0895-59bd-a78e-30d7fbdfbb60", 00:11:56.058 "is_configured": true, 00:11:56.058 "data_offset": 0, 00:11:56.058 "data_size": 65536 00:11:56.058 }, 00:11:56.058 { 00:11:56.058 "name": "BaseBdev2", 00:11:56.058 "uuid": "f0f977c7-f312-5052-8b5e-7440b2aec0f0", 00:11:56.058 "is_configured": true, 00:11:56.058 "data_offset": 0, 00:11:56.058 "data_size": 65536 00:11:56.058 }, 00:11:56.058 { 00:11:56.058 "name": "BaseBdev3", 00:11:56.058 "uuid": "0b2647a1-5f1f-5e63-be86-bd4b8a0d4c1e", 00:11:56.058 "is_configured": true, 00:11:56.058 "data_offset": 0, 00:11:56.058 "data_size": 65536 00:11:56.058 } 00:11:56.058 ] 00:11:56.058 }' 00:11:56.058 19:52:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:56.058 19:52:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:56.058 19:52:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:56.058 19:52:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:56.058 19:52:46 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:56.992 19:52:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:56.992 19:52:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:56.992 19:52:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:56.992 19:52:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:56.992 19:52:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:56.992 19:52:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:56.992 19:52:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:56.992 19:52:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.992 19:52:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:56.992 19:52:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.992 19:52:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.992 19:52:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:56.992 "name": "raid_bdev1", 00:11:56.992 "uuid": "935b694d-2902-4e80-8b2d-9811735152ac", 00:11:56.992 "strip_size_kb": 64, 00:11:56.992 "state": "online", 00:11:56.992 "raid_level": "raid5f", 00:11:56.992 "superblock": false, 00:11:56.992 "num_base_bdevs": 3, 00:11:56.992 "num_base_bdevs_discovered": 3, 00:11:56.992 "num_base_bdevs_operational": 3, 00:11:56.992 "process": { 00:11:56.992 "type": "rebuild", 00:11:56.992 "target": "spare", 00:11:56.992 "progress": { 00:11:56.992 "blocks": 110592, 00:11:56.992 "percent": 84 00:11:56.992 } 00:11:56.992 }, 00:11:56.992 "base_bdevs_list": [ 00:11:56.992 { 
00:11:56.992 "name": "spare", 00:11:56.992 "uuid": "b50057c8-0895-59bd-a78e-30d7fbdfbb60", 00:11:56.992 "is_configured": true, 00:11:56.992 "data_offset": 0, 00:11:56.992 "data_size": 65536 00:11:56.992 }, 00:11:56.992 { 00:11:56.992 "name": "BaseBdev2", 00:11:56.992 "uuid": "f0f977c7-f312-5052-8b5e-7440b2aec0f0", 00:11:56.992 "is_configured": true, 00:11:56.992 "data_offset": 0, 00:11:56.992 "data_size": 65536 00:11:56.992 }, 00:11:56.992 { 00:11:56.992 "name": "BaseBdev3", 00:11:56.992 "uuid": "0b2647a1-5f1f-5e63-be86-bd4b8a0d4c1e", 00:11:56.992 "is_configured": true, 00:11:56.992 "data_offset": 0, 00:11:56.992 "data_size": 65536 00:11:56.992 } 00:11:56.992 ] 00:11:56.992 }' 00:11:56.992 19:52:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:57.250 19:52:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:57.250 19:52:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:57.250 19:52:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:57.250 19:52:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:58.184 [2024-11-26 19:52:48.830890] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:11:58.184 [2024-11-26 19:52:48.830993] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:11:58.184 [2024-11-26 19:52:48.831034] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:58.184 19:52:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:58.184 19:52:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:58.184 19:52:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:58.184 19:52:48 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:58.184 19:52:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:58.184 19:52:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:58.184 19:52:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:58.184 19:52:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:58.184 19:52:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.184 19:52:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.184 19:52:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.184 19:52:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:58.184 "name": "raid_bdev1", 00:11:58.184 "uuid": "935b694d-2902-4e80-8b2d-9811735152ac", 00:11:58.184 "strip_size_kb": 64, 00:11:58.184 "state": "online", 00:11:58.184 "raid_level": "raid5f", 00:11:58.184 "superblock": false, 00:11:58.184 "num_base_bdevs": 3, 00:11:58.184 "num_base_bdevs_discovered": 3, 00:11:58.184 "num_base_bdevs_operational": 3, 00:11:58.184 "base_bdevs_list": [ 00:11:58.184 { 00:11:58.184 "name": "spare", 00:11:58.184 "uuid": "b50057c8-0895-59bd-a78e-30d7fbdfbb60", 00:11:58.184 "is_configured": true, 00:11:58.184 "data_offset": 0, 00:11:58.184 "data_size": 65536 00:11:58.184 }, 00:11:58.184 { 00:11:58.184 "name": "BaseBdev2", 00:11:58.184 "uuid": "f0f977c7-f312-5052-8b5e-7440b2aec0f0", 00:11:58.184 "is_configured": true, 00:11:58.184 "data_offset": 0, 00:11:58.184 "data_size": 65536 00:11:58.184 }, 00:11:58.184 { 00:11:58.184 "name": "BaseBdev3", 00:11:58.184 "uuid": "0b2647a1-5f1f-5e63-be86-bd4b8a0d4c1e", 00:11:58.184 "is_configured": true, 00:11:58.184 "data_offset": 0, 00:11:58.184 "data_size": 65536 00:11:58.184 } 
00:11:58.184 ] 00:11:58.184 }' 00:11:58.184 19:52:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:58.184 19:52:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:11:58.184 19:52:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:58.184 19:52:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:11:58.184 19:52:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:11:58.184 19:52:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:58.184 19:52:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:58.184 19:52:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:58.184 19:52:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:58.184 19:52:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:58.184 19:52:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:58.184 19:52:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:58.184 19:52:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.184 19:52:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.184 19:52:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.184 19:52:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:58.184 "name": "raid_bdev1", 00:11:58.184 "uuid": "935b694d-2902-4e80-8b2d-9811735152ac", 00:11:58.184 "strip_size_kb": 64, 00:11:58.184 "state": "online", 00:11:58.184 "raid_level": "raid5f", 00:11:58.184 "superblock": false, 
00:11:58.184 "num_base_bdevs": 3, 00:11:58.184 "num_base_bdevs_discovered": 3, 00:11:58.184 "num_base_bdevs_operational": 3, 00:11:58.184 "base_bdevs_list": [ 00:11:58.184 { 00:11:58.184 "name": "spare", 00:11:58.184 "uuid": "b50057c8-0895-59bd-a78e-30d7fbdfbb60", 00:11:58.184 "is_configured": true, 00:11:58.184 "data_offset": 0, 00:11:58.184 "data_size": 65536 00:11:58.184 }, 00:11:58.184 { 00:11:58.184 "name": "BaseBdev2", 00:11:58.184 "uuid": "f0f977c7-f312-5052-8b5e-7440b2aec0f0", 00:11:58.184 "is_configured": true, 00:11:58.185 "data_offset": 0, 00:11:58.185 "data_size": 65536 00:11:58.185 }, 00:11:58.185 { 00:11:58.185 "name": "BaseBdev3", 00:11:58.185 "uuid": "0b2647a1-5f1f-5e63-be86-bd4b8a0d4c1e", 00:11:58.185 "is_configured": true, 00:11:58.185 "data_offset": 0, 00:11:58.185 "data_size": 65536 00:11:58.185 } 00:11:58.185 ] 00:11:58.185 }' 00:11:58.185 19:52:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:58.442 19:52:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:58.442 19:52:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:58.442 19:52:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:58.442 19:52:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:11:58.442 19:52:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:58.442 19:52:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:58.442 19:52:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:11:58.442 19:52:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:58.442 19:52:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:58.442 
19:52:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:58.442 19:52:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:58.442 19:52:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:58.442 19:52:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:58.442 19:52:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:58.442 19:52:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:58.442 19:52:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.442 19:52:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.442 19:52:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.442 19:52:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:58.442 "name": "raid_bdev1", 00:11:58.442 "uuid": "935b694d-2902-4e80-8b2d-9811735152ac", 00:11:58.442 "strip_size_kb": 64, 00:11:58.442 "state": "online", 00:11:58.442 "raid_level": "raid5f", 00:11:58.442 "superblock": false, 00:11:58.442 "num_base_bdevs": 3, 00:11:58.442 "num_base_bdevs_discovered": 3, 00:11:58.442 "num_base_bdevs_operational": 3, 00:11:58.442 "base_bdevs_list": [ 00:11:58.442 { 00:11:58.442 "name": "spare", 00:11:58.442 "uuid": "b50057c8-0895-59bd-a78e-30d7fbdfbb60", 00:11:58.442 "is_configured": true, 00:11:58.442 "data_offset": 0, 00:11:58.442 "data_size": 65536 00:11:58.442 }, 00:11:58.442 { 00:11:58.442 "name": "BaseBdev2", 00:11:58.442 "uuid": "f0f977c7-f312-5052-8b5e-7440b2aec0f0", 00:11:58.442 "is_configured": true, 00:11:58.442 "data_offset": 0, 00:11:58.442 "data_size": 65536 00:11:58.442 }, 00:11:58.442 { 00:11:58.442 "name": "BaseBdev3", 00:11:58.442 "uuid": "0b2647a1-5f1f-5e63-be86-bd4b8a0d4c1e", 
00:11:58.442 "is_configured": true, 00:11:58.442 "data_offset": 0, 00:11:58.442 "data_size": 65536 00:11:58.442 } 00:11:58.442 ] 00:11:58.442 }' 00:11:58.442 19:52:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:58.442 19:52:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.700 19:52:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:58.700 19:52:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.700 19:52:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.700 [2024-11-26 19:52:49.458299] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:58.700 [2024-11-26 19:52:49.458331] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:58.700 [2024-11-26 19:52:49.458424] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:58.700 [2024-11-26 19:52:49.458507] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:58.700 [2024-11-26 19:52:49.458526] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:11:58.700 19:52:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.700 19:52:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:11:58.700 19:52:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:58.700 19:52:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.700 19:52:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.700 19:52:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.700 19:52:49 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:11:58.700 19:52:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:11:58.700 19:52:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:11:58.700 19:52:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:11:58.700 19:52:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:11:58.701 19:52:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:11:58.701 19:52:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:58.701 19:52:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:58.701 19:52:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:58.701 19:52:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:11:58.701 19:52:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:58.701 19:52:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:58.701 19:52:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:11:58.960 /dev/nbd0 00:11:58.960 19:52:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:58.960 19:52:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:58.960 19:52:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:11:58.960 19:52:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:11:58.960 19:52:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:58.960 19:52:49 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:58.960 19:52:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:11:58.960 19:52:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:11:58.960 19:52:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:58.960 19:52:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:58.960 19:52:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:58.960 1+0 records in 00:11:58.960 1+0 records out 00:11:58.960 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000180297 s, 22.7 MB/s 00:11:58.960 19:52:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:58.960 19:52:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:11:58.960 19:52:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:58.960 19:52:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:58.960 19:52:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:11:58.960 19:52:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:58.960 19:52:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:58.960 19:52:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:11:59.217 /dev/nbd1 00:11:59.217 19:52:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:11:59.217 19:52:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:11:59.217 19:52:49 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:11:59.217 19:52:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:11:59.217 19:52:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:59.217 19:52:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:59.217 19:52:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:11:59.217 19:52:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:11:59.217 19:52:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:59.217 19:52:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:59.217 19:52:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:59.217 1+0 records in 00:11:59.217 1+0 records out 00:11:59.217 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000236376 s, 17.3 MB/s 00:11:59.217 19:52:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:59.217 19:52:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:11:59.217 19:52:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:59.217 19:52:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:59.217 19:52:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:11:59.217 19:52:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:59.217 19:52:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:59.217 19:52:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- 
# cmp -i 0 /dev/nbd0 /dev/nbd1 00:11:59.217 19:52:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:11:59.217 19:52:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:11:59.217 19:52:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:59.217 19:52:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:59.217 19:52:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:11:59.217 19:52:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:59.217 19:52:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:11:59.474 19:52:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:59.474 19:52:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:59.474 19:52:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:59.474 19:52:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:59.474 19:52:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:59.474 19:52:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:59.474 19:52:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:11:59.474 19:52:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:11:59.474 19:52:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:59.474 19:52:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:11:59.731 19:52:50 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:59.731 19:52:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:59.731 19:52:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:59.731 19:52:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:59.731 19:52:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:59.731 19:52:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:59.731 19:52:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:11:59.731 19:52:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:11:59.731 19:52:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:11:59.732 19:52:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 79259 00:11:59.732 19:52:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 79259 ']' 00:11:59.732 19:52:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 79259 00:11:59.732 19:52:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:11:59.732 19:52:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:59.732 19:52:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79259 00:11:59.732 killing process with pid 79259 00:11:59.732 Received shutdown signal, test time was about 60.000000 seconds 00:11:59.732 00:11:59.732 Latency(us) 00:11:59.732 [2024-11-26T19:52:50.669Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:59.732 [2024-11-26T19:52:50.669Z] =================================================================================================================== 00:11:59.732 [2024-11-26T19:52:50.669Z] Total : 0.00 0.00 0.00 0.00 0.00 
18446744073709551616.00 0.00 00:11:59.732 19:52:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:59.732 19:52:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:59.732 19:52:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79259' 00:11:59.732 19:52:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@973 -- # kill 79259 00:11:59.732 [2024-11-26 19:52:50.522991] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:59.732 19:52:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@978 -- # wait 79259 00:11:59.990 [2024-11-26 19:52:50.729819] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:00.556 ************************************ 00:12:00.556 END TEST raid5f_rebuild_test 00:12:00.556 ************************************ 00:12:00.556 19:52:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:12:00.556 00:12:00.556 real 0m13.159s 00:12:00.556 user 0m15.742s 00:12:00.556 sys 0m1.609s 00:12:00.556 19:52:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:00.556 19:52:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.556 19:52:51 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 3 true false true 00:12:00.556 19:52:51 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:12:00.556 19:52:51 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:00.556 19:52:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:00.556 ************************************ 00:12:00.556 START TEST raid5f_rebuild_test_sb 00:12:00.556 ************************************ 00:12:00.556 19:52:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 3 true false true 
00:12:00.556 19:52:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:12:00.556 19:52:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:12:00.556 19:52:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:12:00.556 19:52:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:12:00.556 19:52:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:00.556 19:52:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:00.556 19:52:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:00.556 19:52:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:00.556 19:52:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:00.556 19:52:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:00.556 19:52:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:00.556 19:52:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:00.556 19:52:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:00.556 19:52:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:12:00.556 19:52:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:00.556 19:52:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:00.556 19:52:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:12:00.556 19:52:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:00.556 19:52:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local 
raid_bdev_name=raid_bdev1 00:12:00.556 19:52:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:00.556 19:52:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:00.556 19:52:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:00.556 19:52:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:00.556 19:52:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:12:00.556 19:52:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:12:00.556 19:52:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:12:00.556 19:52:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:12:00.556 19:52:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:12:00.556 19:52:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:12:00.556 19:52:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=79678 00:12:00.556 19:52:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 79678 00:12:00.556 19:52:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 79678 ']' 00:12:00.556 19:52:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:00.556 19:52:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:00.556 19:52:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:00.556 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:12:00.556 19:52:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:00.556 19:52:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:00.556 19:52:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:00.556 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:00.556 Zero copy mechanism will not be used. 00:12:00.556 [2024-11-26 19:52:51.437770] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 00:12:00.557 [2024-11-26 19:52:51.437881] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79678 ] 00:12:00.814 [2024-11-26 19:52:51.586484] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:00.814 [2024-11-26 19:52:51.688303] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:01.072 [2024-11-26 19:52:51.806855] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:01.072 [2024-11-26 19:52:51.806908] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:01.646 19:52:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:01.646 19:52:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:12:01.646 19:52:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:01.646 19:52:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:01.646 19:52:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.646 19:52:52 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:01.646 BaseBdev1_malloc 00:12:01.646 19:52:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.646 19:52:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:01.646 19:52:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.646 19:52:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:01.646 [2024-11-26 19:52:52.319059] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:01.646 [2024-11-26 19:52:52.319119] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:01.646 [2024-11-26 19:52:52.319140] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:01.646 [2024-11-26 19:52:52.319150] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:01.646 [2024-11-26 19:52:52.321037] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:01.646 [2024-11-26 19:52:52.321071] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:01.646 BaseBdev1 00:12:01.646 19:52:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.646 19:52:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:01.646 19:52:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:01.646 19:52:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.646 19:52:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:01.646 BaseBdev2_malloc 00:12:01.646 19:52:52 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.646 19:52:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:12:01.646 19:52:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.646 19:52:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:01.646 [2024-11-26 19:52:52.352644] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:12:01.646 [2024-11-26 19:52:52.352691] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:01.646 [2024-11-26 19:52:52.352708] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:01.646 [2024-11-26 19:52:52.352717] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:01.646 [2024-11-26 19:52:52.354517] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:01.646 [2024-11-26 19:52:52.354550] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:01.646 BaseBdev2 00:12:01.646 19:52:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.646 19:52:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:01.646 19:52:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:01.646 19:52:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.646 19:52:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:01.646 BaseBdev3_malloc 00:12:01.646 19:52:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.646 19:52:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b 
BaseBdev3_malloc -p BaseBdev3 00:12:01.647 19:52:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.647 19:52:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:01.647 [2024-11-26 19:52:52.403082] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:12:01.647 [2024-11-26 19:52:52.403126] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:01.647 [2024-11-26 19:52:52.403145] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:12:01.647 [2024-11-26 19:52:52.403155] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:01.647 [2024-11-26 19:52:52.404986] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:01.647 [2024-11-26 19:52:52.405020] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:01.647 BaseBdev3 00:12:01.647 19:52:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.647 19:52:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:12:01.647 19:52:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.647 19:52:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:01.647 spare_malloc 00:12:01.647 19:52:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.647 19:52:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:12:01.647 19:52:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.647 19:52:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:01.647 spare_delay 00:12:01.647 
19:52:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:01.647 19:52:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare
00:12:01.647 19:52:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:01.647 19:52:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:01.647 [2024-11-26 19:52:52.444153] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay
00:12:01.647 [2024-11-26 19:52:52.444194] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:12:01.647 [2024-11-26 19:52:52.444206] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80
00:12:01.647 [2024-11-26 19:52:52.444214] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:12:01.647 [2024-11-26 19:52:52.446048] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:12:01.647 [2024-11-26 19:52:52.446081] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
00:12:01.647 spare
00:12:01.647 19:52:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:01.647 19:52:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1
00:12:01.647 19:52:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:01.647 19:52:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:01.647 [2024-11-26 19:52:52.452219] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:12:01.647 [2024-11-26 19:52:52.453809] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:12:01.647 [2024-11-26 19:52:52.453868] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:12:01.647 [2024-11-26 19:52:52.454014] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
00:12:01.647 [2024-11-26 19:52:52.454028] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512
00:12:01.647 [2024-11-26 19:52:52.454239] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0
00:12:01.647 [2024-11-26 19:52:52.457332] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
00:12:01.647 [2024-11-26 19:52:52.457362] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780
00:12:01.647 [2024-11-26 19:52:52.457501] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:12:01.647 19:52:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:01.647 19:52:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3
00:12:01.647 19:52:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:12:01.647 19:52:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:12:01.647 19:52:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:12:01.647 19:52:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:12:01.647 19:52:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:12:01.647 19:52:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:01.647 19:52:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:01.647 19:52:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:01.647 19:52:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:01.647 19:52:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:01.647 19:52:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:12:01.647 19:52:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:01.647 19:52:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:01.647 19:52:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:01.647 19:52:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:01.647 "name": "raid_bdev1",
00:12:01.647 "uuid": "de858b71-3779-4fce-b7a4-8ab1434ac201",
00:12:01.647 "strip_size_kb": 64,
00:12:01.647 "state": "online",
00:12:01.647 "raid_level": "raid5f",
00:12:01.647 "superblock": true,
00:12:01.647 "num_base_bdevs": 3,
00:12:01.647 "num_base_bdevs_discovered": 3,
00:12:01.647 "num_base_bdevs_operational": 3,
00:12:01.647 "base_bdevs_list": [
00:12:01.647 {
00:12:01.647 "name": "BaseBdev1",
00:12:01.647 "uuid": "6197154b-6e72-56d8-9586-23a451edeefc",
00:12:01.647 "is_configured": true,
00:12:01.647 "data_offset": 2048,
00:12:01.647 "data_size": 63488
00:12:01.647 },
00:12:01.647 {
00:12:01.647 "name": "BaseBdev2",
00:12:01.647 "uuid": "37cf5a26-4876-5341-93c0-3aa200d77da4",
00:12:01.647 "is_configured": true,
00:12:01.647 "data_offset": 2048,
00:12:01.647 "data_size": 63488
00:12:01.647 },
00:12:01.647 {
00:12:01.647 "name": "BaseBdev3",
00:12:01.647 "uuid": "38dd57c5-b60c-5acf-883b-ab7ac7b6fc50",
00:12:01.647 "is_configured": true,
00:12:01.647 "data_offset": 2048,
00:12:01.647 "data_size": 63488
00:12:01.647 }
00:12:01.647 ]
00:12:01.647 }'
00:12:01.647 19:52:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:01.647 19:52:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:01.906 19:52:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks'
00:12:01.906 19:52:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:12:01.906 19:52:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:01.906 19:52:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:01.906 [2024-11-26 19:52:52.781770] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:12:01.906 19:52:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:01.906 19:52:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=126976
00:12:01.906 19:52:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:01.906 19:52:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:01.906 19:52:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:01.906 19:52:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset'
00:12:01.906 19:52:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:01.906 19:52:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048
00:12:01.906 19:52:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']'
00:12:01.906 19:52:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']'
00:12:01.906 19:52:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size
00:12:01.906 19:52:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0
00:12:01.906 19:52:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock
00:12:01.906 19:52:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1')
00:12:01.906 19:52:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list
00:12:01.906 19:52:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0')
00:12:01.906 19:52:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list
00:12:01.906 19:52:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i
00:12:01.906 19:52:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:12:01.906 19:52:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:12:01.906 19:52:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0
00:12:02.164 [2024-11-26 19:52:52.973655] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080
00:12:02.164 /dev/nbd0
00:12:02.164 19:52:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:12:02.164 19:52:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:12:02.164 19:52:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:12:02.164 19:52:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i
00:12:02.164 19:52:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:12:02.164 19:52:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:12:02.164 19:52:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:12:02.164 19:52:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break
00:12:02.164 19:52:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:12:02.164 19:52:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:12:02.164 19:52:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:12:02.164 1+0 records in
00:12:02.164 1+0 records out
00:12:02.164 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000235592 s, 17.4 MB/s
00:12:02.164 19:52:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:12:02.164 19:52:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096
00:12:02.164 19:52:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:12:02.164 19:52:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:12:02.164 19:52:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0
00:12:02.164 19:52:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:12:02.164 19:52:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:12:02.164 19:52:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']'
00:12:02.164 19:52:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=256
00:12:02.164 19:52:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 128
00:12:02.164 19:52:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=496 oflag=direct
00:12:02.728 496+0 records in
00:12:02.728 496+0 records out
00:12:02.728 65011712 bytes (65 MB, 62 MiB) copied, 0.328438 s, 198 MB/s
00:12:02.728 19:52:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0
00:12:02.728 19:52:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock
00:12:02.728 19:52:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:12:02.728 19:52:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list
00:12:02.728 19:52:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i
00:12:02.728 19:52:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:12:02.728 19:52:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0
00:12:02.728 [2024-11-26 19:52:53.573557] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:12:02.728 19:52:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:12:02.728 19:52:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:12:02.728 19:52:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:12:02.728 19:52:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:12:02.728 19:52:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:12:02.728 19:52:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:12:02.728 19:52:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break
00:12:02.728 19:52:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0
00:12:02.728 19:52:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1
00:12:02.728 19:52:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:02.728 19:52:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:02.728 [2024-11-26 19:52:53.593254] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:12:02.728 19:52:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:02.728 19:52:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2
00:12:02.728 19:52:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:12:02.728 19:52:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:12:02.728 19:52:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:12:02.728 19:52:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:12:02.728 19:52:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:12:02.728 19:52:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:02.728 19:52:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:02.728 19:52:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:02.728 19:52:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:02.728 19:52:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:02.728 19:52:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:02.728 19:52:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:02.729 19:52:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:12:02.729 19:52:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:02.729 19:52:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:02.729 "name": "raid_bdev1",
00:12:02.729 "uuid": "de858b71-3779-4fce-b7a4-8ab1434ac201",
00:12:02.729 "strip_size_kb": 64,
00:12:02.729 "state": "online",
00:12:02.729 "raid_level": "raid5f",
00:12:02.729 "superblock": true,
00:12:02.729 "num_base_bdevs": 3,
00:12:02.729 "num_base_bdevs_discovered": 2,
00:12:02.729 "num_base_bdevs_operational": 2,
00:12:02.729 "base_bdevs_list": [
00:12:02.729 {
00:12:02.729 "name": null,
00:12:02.729 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:02.729 "is_configured": false,
00:12:02.729 "data_offset": 0,
00:12:02.729 "data_size": 63488
00:12:02.729 },
00:12:02.729 {
00:12:02.729 "name": "BaseBdev2",
00:12:02.729 "uuid": "37cf5a26-4876-5341-93c0-3aa200d77da4",
00:12:02.729 "is_configured": true,
00:12:02.729 "data_offset": 2048,
00:12:02.729 "data_size": 63488
00:12:02.729 },
00:12:02.729 {
00:12:02.729 "name": "BaseBdev3",
00:12:02.729 "uuid": "38dd57c5-b60c-5acf-883b-ab7ac7b6fc50",
00:12:02.729 "is_configured": true,
00:12:02.729 "data_offset": 2048,
00:12:02.729 "data_size": 63488
00:12:02.729 }
00:12:02.729 ]
00:12:02.729 }'
00:12:02.729 19:52:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:02.729 19:52:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:03.292 19:52:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:12:03.292 19:52:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:03.292 19:52:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:03.292 [2024-11-26 19:52:53.941360] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:12:03.292 [2024-11-26 19:52:53.951175] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000028f80
00:12:03.292 19:52:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:03.292 19:52:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1
00:12:03.292 [2024-11-26 19:52:53.955817] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:12:04.226 19:52:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:12:04.226 19:52:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:12:04.226 19:52:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:12:04.226 19:52:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare
00:12:04.226 19:52:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:12:04.226 19:52:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:12:04.226 19:52:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:04.226 19:52:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:04.226 19:52:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:04.226 19:52:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:04.226 19:52:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:12:04.226 "name": "raid_bdev1",
00:12:04.226 "uuid": "de858b71-3779-4fce-b7a4-8ab1434ac201",
00:12:04.226 "strip_size_kb": 64,
00:12:04.226 "state": "online",
00:12:04.226 "raid_level": "raid5f",
00:12:04.226 "superblock": true,
00:12:04.226 "num_base_bdevs": 3,
00:12:04.226 "num_base_bdevs_discovered": 3,
00:12:04.226 "num_base_bdevs_operational": 3,
00:12:04.226 "process": {
00:12:04.226 "type": "rebuild",
00:12:04.226 "target": "spare",
00:12:04.226 "progress": {
00:12:04.226 "blocks": 18432,
00:12:04.226 "percent": 14
00:12:04.226 }
00:12:04.226 },
00:12:04.226 "base_bdevs_list": [
00:12:04.226 {
00:12:04.226 "name": "spare",
00:12:04.226 "uuid": "f9f1499e-bc33-530f-916b-1a0a28fcd411",
00:12:04.226 "is_configured": true,
00:12:04.226 "data_offset": 2048,
00:12:04.226 "data_size": 63488
00:12:04.226 },
00:12:04.226 {
00:12:04.226 "name": "BaseBdev2",
00:12:04.226 "uuid": "37cf5a26-4876-5341-93c0-3aa200d77da4",
00:12:04.226 "is_configured": true,
00:12:04.226 "data_offset": 2048,
00:12:04.226 "data_size": 63488
00:12:04.226 },
00:12:04.226 {
00:12:04.227 "name": "BaseBdev3",
00:12:04.227 "uuid": "38dd57c5-b60c-5acf-883b-ab7ac7b6fc50",
00:12:04.227 "is_configured": true,
00:12:04.227 "data_offset": 2048,
00:12:04.227 "data_size": 63488
00:12:04.227 }
00:12:04.227 ]
00:12:04.227 }'
00:12:04.226 19:52:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:12:04.227 19:52:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:12:04.227 19:52:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:12:04.227 19:52:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:12:04.227 19:52:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare
00:12:04.227 19:52:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:04.227 19:52:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:04.227 [2024-11-26 19:52:55.049032] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:12:04.227 [2024-11-26 19:52:55.066177] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:12:04.227 [2024-11-26 19:52:55.066230] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:12:04.227 [2024-11-26 19:52:55.066245] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:12:04.227 [2024-11-26 19:52:55.066252] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device
00:12:04.227 19:52:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:04.227 19:52:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2
00:12:04.227 19:52:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:12:04.227 19:52:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:12:04.227 19:52:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:12:04.227 19:52:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:12:04.227 19:52:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:12:04.227 19:52:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:04.227 19:52:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:04.227 19:52:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:04.227 19:52:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:04.227 19:52:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:12:04.227 19:52:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:04.227 19:52:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:04.227 19:52:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:04.227 19:52:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:04.227 19:52:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:04.227 "name": "raid_bdev1",
00:12:04.227 "uuid": "de858b71-3779-4fce-b7a4-8ab1434ac201",
00:12:04.227 "strip_size_kb": 64,
00:12:04.227 "state": "online",
00:12:04.227 "raid_level": "raid5f",
00:12:04.227 "superblock": true,
00:12:04.227 "num_base_bdevs": 3,
00:12:04.227 "num_base_bdevs_discovered": 2,
00:12:04.227 "num_base_bdevs_operational": 2,
00:12:04.227 "base_bdevs_list": [
00:12:04.227 {
00:12:04.227 "name": null,
00:12:04.227 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:04.227 "is_configured": false,
00:12:04.227 "data_offset": 0,
00:12:04.227 "data_size": 63488
00:12:04.227 },
00:12:04.227 {
00:12:04.227 "name": "BaseBdev2",
00:12:04.227 "uuid": "37cf5a26-4876-5341-93c0-3aa200d77da4",
00:12:04.227 "is_configured": true,
00:12:04.227 "data_offset": 2048,
00:12:04.227 "data_size": 63488
00:12:04.227 },
00:12:04.227 {
00:12:04.227 "name": "BaseBdev3",
00:12:04.227 "uuid": "38dd57c5-b60c-5acf-883b-ab7ac7b6fc50",
00:12:04.227 "is_configured": true,
00:12:04.227 "data_offset": 2048,
00:12:04.227 "data_size": 63488
00:12:04.227 }
00:12:04.227 ]
00:12:04.227 }'
00:12:04.227 19:52:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:04.227 19:52:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:04.485 19:52:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none
00:12:04.485 19:52:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:12:04.485 19:52:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:12:04.485 19:52:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none
00:12:04.485 19:52:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:12:04.485 19:52:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:04.485 19:52:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:12:04.485 19:52:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:04.485 19:52:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:04.485 19:52:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:04.743 19:52:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:12:04.743 "name": "raid_bdev1",
00:12:04.743 "uuid": "de858b71-3779-4fce-b7a4-8ab1434ac201",
00:12:04.743 "strip_size_kb": 64,
00:12:04.743 "state": "online",
00:12:04.743 "raid_level": "raid5f",
00:12:04.743 "superblock": true,
00:12:04.743 "num_base_bdevs": 3,
00:12:04.743 "num_base_bdevs_discovered": 2,
00:12:04.743 "num_base_bdevs_operational": 2,
00:12:04.743 "base_bdevs_list": [
00:12:04.743 {
00:12:04.743 "name": null,
00:12:04.743 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:04.743 "is_configured": false,
00:12:04.743 "data_offset": 0,
00:12:04.743 "data_size": 63488
00:12:04.743 },
00:12:04.743 {
00:12:04.743 "name": "BaseBdev2",
00:12:04.743 "uuid": "37cf5a26-4876-5341-93c0-3aa200d77da4",
00:12:04.743 "is_configured": true,
00:12:04.743 "data_offset": 2048,
00:12:04.743 "data_size": 63488
00:12:04.743 },
00:12:04.743 {
00:12:04.743 "name": "BaseBdev3",
00:12:04.743 "uuid": "38dd57c5-b60c-5acf-883b-ab7ac7b6fc50",
00:12:04.743 "is_configured": true,
00:12:04.743 "data_offset": 2048,
00:12:04.743 "data_size": 63488
00:12:04.743 }
00:12:04.743 ]
00:12:04.743 }'
00:12:04.743 19:52:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:12:04.743 19:52:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:12:04.743 19:52:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:12:04.743 19:52:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:12:04.743 19:52:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:12:04.743 19:52:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:04.743 19:52:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:04.743 [2024-11-26 19:52:55.493442] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:12:04.743 [2024-11-26 19:52:55.502097] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000029050
00:12:04.743 19:52:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:04.743 19:52:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1
00:12:04.743 [2024-11-26 19:52:55.506560] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:12:05.681 19:52:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:12:05.681 19:52:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:12:05.681 19:52:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:12:05.681 19:52:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare
00:12:05.681 19:52:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:12:05.681 19:52:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:12:05.681 19:52:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:05.681 19:52:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:05.681 19:52:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:05.681 19:52:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:05.681 19:52:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:12:05.681 "name": "raid_bdev1",
00:12:05.681 "uuid": "de858b71-3779-4fce-b7a4-8ab1434ac201",
00:12:05.681 "strip_size_kb": 64,
00:12:05.681 "state": "online",
00:12:05.681 "raid_level": "raid5f",
00:12:05.681 "superblock": true,
00:12:05.681 "num_base_bdevs": 3,
00:12:05.681 "num_base_bdevs_discovered": 3,
00:12:05.681 "num_base_bdevs_operational": 3,
00:12:05.681 "process": {
00:12:05.681 "type": "rebuild",
00:12:05.681 "target": "spare",
00:12:05.681 "progress": {
00:12:05.681 "blocks": 18432,
00:12:05.681 "percent": 14
00:12:05.681 }
00:12:05.681 },
00:12:05.681 "base_bdevs_list": [
00:12:05.681 {
00:12:05.681 "name": "spare",
00:12:05.681 "uuid": "f9f1499e-bc33-530f-916b-1a0a28fcd411",
00:12:05.681 "is_configured": true,
00:12:05.681 "data_offset": 2048,
00:12:05.681 "data_size": 63488
00:12:05.681 },
00:12:05.681 {
00:12:05.681 "name": "BaseBdev2",
00:12:05.681 "uuid": "37cf5a26-4876-5341-93c0-3aa200d77da4",
00:12:05.681 "is_configured": true,
00:12:05.681 "data_offset": 2048,
00:12:05.681 "data_size": 63488
00:12:05.681 },
00:12:05.681 {
00:12:05.681 "name": "BaseBdev3",
00:12:05.681 "uuid": "38dd57c5-b60c-5acf-883b-ab7ac7b6fc50",
00:12:05.681 "is_configured": true,
00:12:05.681 "data_offset": 2048,
00:12:05.681 "data_size": 63488
00:12:05.681 }
00:12:05.681 ]
00:12:05.681 }'
00:12:05.681 19:52:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:12:05.681 19:52:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
19:52:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:12:05.681 19:52:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:12:05.681 19:52:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']'
00:12:05.681 19:52:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']'
00:12:05.681 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected
00:12:05.681 19:52:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3
00:12:05.681 19:52:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']'
00:12:05.681 19:52:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=446
00:12:05.681 19:52:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:12:05.681 19:52:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:12:05.681 19:52:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:12:05.681 19:52:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:12:05.681 19:52:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare
00:12:05.681 19:52:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:12:05.681 19:52:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:05.681 19:52:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:12:05.681 19:52:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:05.681 19:52:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:05.939 19:52:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:05.939 19:52:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:12:05.939 "name": "raid_bdev1",
00:12:05.939 "uuid": "de858b71-3779-4fce-b7a4-8ab1434ac201",
00:12:05.939 "strip_size_kb": 64,
00:12:05.939 "state": "online",
00:12:05.939 "raid_level": "raid5f",
00:12:05.939 "superblock": true,
00:12:05.939 "num_base_bdevs": 3,
00:12:05.939 "num_base_bdevs_discovered": 3,
00:12:05.939 "num_base_bdevs_operational": 3,
00:12:05.939 "process": {
00:12:05.939 "type": "rebuild",
00:12:05.939 "target": "spare",
00:12:05.939 "progress": {
00:12:05.939 "blocks": 20480,
00:12:05.939 "percent": 16
00:12:05.939 }
00:12:05.939 },
00:12:05.939 "base_bdevs_list": [
00:12:05.939 {
00:12:05.939 "name": "spare",
00:12:05.939 "uuid": "f9f1499e-bc33-530f-916b-1a0a28fcd411",
00:12:05.939 "is_configured": true,
00:12:05.939 "data_offset": 2048,
00:12:05.939 "data_size": 63488
00:12:05.939 },
00:12:05.939 {
00:12:05.939 "name": "BaseBdev2",
00:12:05.939 "uuid": "37cf5a26-4876-5341-93c0-3aa200d77da4",
00:12:05.939 "is_configured": true,
00:12:05.939 "data_offset": 2048,
00:12:05.939 "data_size": 63488
00:12:05.939 },
00:12:05.939 {
00:12:05.939 "name": "BaseBdev3",
00:12:05.939 "uuid": "38dd57c5-b60c-5acf-883b-ab7ac7b6fc50",
00:12:05.939 "is_configured": true,
00:12:05.939 "data_offset": 2048,
00:12:05.939 "data_size": 63488
00:12:05.939 }
00:12:05.939 ]
00:12:05.939 }'
00:12:05.939 19:52:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:12:05.939 19:52:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:12:05.939 19:52:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:12:05.939 19:52:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
19:52:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1
00:12:06.874 19:52:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:12:06.874 19:52:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:12:06.874 19:52:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:12:06.874 19:52:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:12:06.874 19:52:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare
00:12:06.874 19:52:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:12:06.874 19:52:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:06.874 19:52:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:06.874 19:52:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:06.874 19:52:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:12:06.874 19:52:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:06.874 19:52:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:12:06.874 "name": "raid_bdev1",
00:12:06.874 "uuid": "de858b71-3779-4fce-b7a4-8ab1434ac201",
00:12:06.874 "strip_size_kb": 64,
00:12:06.874 "state": "online",
00:12:06.874 "raid_level": "raid5f",
00:12:06.874 "superblock": true,
00:12:06.874 "num_base_bdevs": 3,
00:12:06.874 "num_base_bdevs_discovered": 3,
00:12:06.874 "num_base_bdevs_operational": 3,
00:12:06.874 "process": {
00:12:06.874 "type": "rebuild",
00:12:06.874 "target": "spare",
00:12:06.874 "progress": {
00:12:06.874 "blocks": 43008,
00:12:06.874 "percent": 33
00:12:06.874 }
00:12:06.874 },
00:12:06.874 "base_bdevs_list": [
00:12:06.874 {
00:12:06.874 "name": "spare",
00:12:06.874 "uuid": "f9f1499e-bc33-530f-916b-1a0a28fcd411",
00:12:06.874 "is_configured": true,
00:12:06.874 "data_offset": 2048,
00:12:06.874 "data_size": 63488
00:12:06.874 },
00:12:06.874 {
00:12:06.874 "name": "BaseBdev2",
00:12:06.874 "uuid": "37cf5a26-4876-5341-93c0-3aa200d77da4",
00:12:06.874 "is_configured": true,
00:12:06.874 "data_offset": 2048,
00:12:06.874 "data_size": 63488
00:12:06.874 },
00:12:06.874 {
00:12:06.874 "name": "BaseBdev3",
00:12:06.874 "uuid": "38dd57c5-b60c-5acf-883b-ab7ac7b6fc50",
00:12:06.874 "is_configured": true,
00:12:06.874 "data_offset": 2048,
00:12:06.874 "data_size": 63488
00:12:06.874 }
00:12:06.874 ]
00:12:06.874 }'
00:12:06.874 19:52:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:12:06.874 19:52:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:12:06.874 19:52:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:12:06.874 19:52:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:12:06.874 19:52:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1
00:12:08.247 19:52:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:12:08.247 19:52:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:12:08.247 19:52:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:12:08.247 19:52:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:12:08.247 19:52:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare
00:12:08.247 19:52:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:12:08.247 19:52:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:08.247 19:52:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:08.247 19:52:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:08.247 19:52:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:12:08.247 19:52:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:08.247 19:52:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:12:08.247 "name": "raid_bdev1",
00:12:08.247 "uuid": "de858b71-3779-4fce-b7a4-8ab1434ac201",
00:12:08.247 "strip_size_kb": 64,
00:12:08.247 "state": "online",
00:12:08.247 "raid_level": "raid5f",
00:12:08.247 "superblock": true,
00:12:08.247 "num_base_bdevs": 3,
00:12:08.247 "num_base_bdevs_discovered": 3,
00:12:08.247 "num_base_bdevs_operational": 3,
00:12:08.247 "process": {
00:12:08.247 "type": "rebuild",
00:12:08.247 "target": "spare",
00:12:08.247 "progress": {
00:12:08.247 "blocks": 65536,
00:12:08.247 "percent": 51
00:12:08.247 }
00:12:08.247 },
00:12:08.247 "base_bdevs_list": [
00:12:08.247 {
00:12:08.247 "name": "spare",
00:12:08.247 "uuid": "f9f1499e-bc33-530f-916b-1a0a28fcd411",
00:12:08.247 "is_configured": true,
00:12:08.247 "data_offset": 2048,
00:12:08.247 "data_size": 63488
00:12:08.247 },
00:12:08.247 {
00:12:08.247 "name": "BaseBdev2",
00:12:08.247 "uuid": "37cf5a26-4876-5341-93c0-3aa200d77da4",
00:12:08.247 "is_configured": true,
00:12:08.247 "data_offset": 2048,
00:12:08.247 "data_size": 63488
00:12:08.247 },
00:12:08.247 {
00:12:08.247 "name": "BaseBdev3",
00:12:08.247 "uuid": "38dd57c5-b60c-5acf-883b-ab7ac7b6fc50",
00:12:08.247 "is_configured": true,
00:12:08.247 "data_offset": 2048,
00:12:08.247 "data_size": 63488
00:12:08.247 }
00:12:08.247 ]
00:12:08.247 }'
00:12:08.247 19:52:58
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:08.247 19:52:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:08.247 19:52:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:08.247 19:52:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:08.247 19:52:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:09.180 19:52:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:09.180 19:52:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:09.180 19:52:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:09.180 19:52:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:09.180 19:52:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:09.180 19:52:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:09.180 19:52:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:09.180 19:52:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.180 19:52:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:09.180 19:52:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:09.180 19:52:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.180 19:52:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:09.180 "name": "raid_bdev1", 00:12:09.180 "uuid": "de858b71-3779-4fce-b7a4-8ab1434ac201", 00:12:09.180 
"strip_size_kb": 64, 00:12:09.180 "state": "online", 00:12:09.180 "raid_level": "raid5f", 00:12:09.180 "superblock": true, 00:12:09.180 "num_base_bdevs": 3, 00:12:09.180 "num_base_bdevs_discovered": 3, 00:12:09.180 "num_base_bdevs_operational": 3, 00:12:09.180 "process": { 00:12:09.180 "type": "rebuild", 00:12:09.180 "target": "spare", 00:12:09.180 "progress": { 00:12:09.180 "blocks": 88064, 00:12:09.180 "percent": 69 00:12:09.180 } 00:12:09.180 }, 00:12:09.180 "base_bdevs_list": [ 00:12:09.180 { 00:12:09.180 "name": "spare", 00:12:09.180 "uuid": "f9f1499e-bc33-530f-916b-1a0a28fcd411", 00:12:09.180 "is_configured": true, 00:12:09.180 "data_offset": 2048, 00:12:09.180 "data_size": 63488 00:12:09.180 }, 00:12:09.180 { 00:12:09.180 "name": "BaseBdev2", 00:12:09.180 "uuid": "37cf5a26-4876-5341-93c0-3aa200d77da4", 00:12:09.180 "is_configured": true, 00:12:09.180 "data_offset": 2048, 00:12:09.180 "data_size": 63488 00:12:09.180 }, 00:12:09.180 { 00:12:09.180 "name": "BaseBdev3", 00:12:09.180 "uuid": "38dd57c5-b60c-5acf-883b-ab7ac7b6fc50", 00:12:09.180 "is_configured": true, 00:12:09.180 "data_offset": 2048, 00:12:09.180 "data_size": 63488 00:12:09.180 } 00:12:09.180 ] 00:12:09.180 }' 00:12:09.180 19:52:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:09.180 19:52:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:09.180 19:52:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:09.180 19:53:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:09.180 19:53:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:10.114 19:53:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:10.114 19:53:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 
00:12:10.114 19:53:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:10.114 19:53:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:10.114 19:53:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:10.114 19:53:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:10.114 19:53:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:10.114 19:53:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:10.114 19:53:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.114 19:53:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.114 19:53:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.114 19:53:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:10.114 "name": "raid_bdev1", 00:12:10.114 "uuid": "de858b71-3779-4fce-b7a4-8ab1434ac201", 00:12:10.114 "strip_size_kb": 64, 00:12:10.114 "state": "online", 00:12:10.114 "raid_level": "raid5f", 00:12:10.114 "superblock": true, 00:12:10.114 "num_base_bdevs": 3, 00:12:10.114 "num_base_bdevs_discovered": 3, 00:12:10.114 "num_base_bdevs_operational": 3, 00:12:10.114 "process": { 00:12:10.114 "type": "rebuild", 00:12:10.114 "target": "spare", 00:12:10.114 "progress": { 00:12:10.114 "blocks": 110592, 00:12:10.114 "percent": 87 00:12:10.114 } 00:12:10.114 }, 00:12:10.114 "base_bdevs_list": [ 00:12:10.114 { 00:12:10.114 "name": "spare", 00:12:10.114 "uuid": "f9f1499e-bc33-530f-916b-1a0a28fcd411", 00:12:10.114 "is_configured": true, 00:12:10.114 "data_offset": 2048, 00:12:10.114 "data_size": 63488 00:12:10.114 }, 00:12:10.114 { 00:12:10.114 "name": "BaseBdev2", 00:12:10.114 "uuid": 
"37cf5a26-4876-5341-93c0-3aa200d77da4", 00:12:10.114 "is_configured": true, 00:12:10.114 "data_offset": 2048, 00:12:10.114 "data_size": 63488 00:12:10.114 }, 00:12:10.114 { 00:12:10.114 "name": "BaseBdev3", 00:12:10.114 "uuid": "38dd57c5-b60c-5acf-883b-ab7ac7b6fc50", 00:12:10.114 "is_configured": true, 00:12:10.114 "data_offset": 2048, 00:12:10.114 "data_size": 63488 00:12:10.114 } 00:12:10.114 ] 00:12:10.114 }' 00:12:10.114 19:53:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:10.372 19:53:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:10.372 19:53:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:10.372 19:53:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:10.372 19:53:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:10.937 [2024-11-26 19:53:01.761240] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:12:10.937 [2024-11-26 19:53:01.761329] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:12:10.937 [2024-11-26 19:53:01.761460] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:11.195 19:53:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:11.195 19:53:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:11.195 19:53:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:11.195 19:53:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:11.195 19:53:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:11.195 19:53:02 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:11.195 19:53:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:11.195 19:53:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.195 19:53:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:11.195 19:53:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.455 19:53:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.455 19:53:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:11.455 "name": "raid_bdev1", 00:12:11.455 "uuid": "de858b71-3779-4fce-b7a4-8ab1434ac201", 00:12:11.455 "strip_size_kb": 64, 00:12:11.455 "state": "online", 00:12:11.455 "raid_level": "raid5f", 00:12:11.455 "superblock": true, 00:12:11.455 "num_base_bdevs": 3, 00:12:11.455 "num_base_bdevs_discovered": 3, 00:12:11.455 "num_base_bdevs_operational": 3, 00:12:11.455 "base_bdevs_list": [ 00:12:11.455 { 00:12:11.455 "name": "spare", 00:12:11.455 "uuid": "f9f1499e-bc33-530f-916b-1a0a28fcd411", 00:12:11.455 "is_configured": true, 00:12:11.455 "data_offset": 2048, 00:12:11.455 "data_size": 63488 00:12:11.455 }, 00:12:11.455 { 00:12:11.455 "name": "BaseBdev2", 00:12:11.455 "uuid": "37cf5a26-4876-5341-93c0-3aa200d77da4", 00:12:11.455 "is_configured": true, 00:12:11.455 "data_offset": 2048, 00:12:11.455 "data_size": 63488 00:12:11.455 }, 00:12:11.455 { 00:12:11.455 "name": "BaseBdev3", 00:12:11.455 "uuid": "38dd57c5-b60c-5acf-883b-ab7ac7b6fc50", 00:12:11.455 "is_configured": true, 00:12:11.455 "data_offset": 2048, 00:12:11.455 "data_size": 63488 00:12:11.455 } 00:12:11.455 ] 00:12:11.455 }' 00:12:11.455 19:53:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:11.455 19:53:02 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:12:11.455 19:53:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:11.455 19:53:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:12:11.455 19:53:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:12:11.455 19:53:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:11.455 19:53:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:11.455 19:53:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:11.455 19:53:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:11.455 19:53:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:11.455 19:53:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:11.455 19:53:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.455 19:53:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:11.455 19:53:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.455 19:53:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.455 19:53:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:11.455 "name": "raid_bdev1", 00:12:11.455 "uuid": "de858b71-3779-4fce-b7a4-8ab1434ac201", 00:12:11.455 "strip_size_kb": 64, 00:12:11.455 "state": "online", 00:12:11.455 "raid_level": "raid5f", 00:12:11.455 "superblock": true, 00:12:11.455 "num_base_bdevs": 3, 00:12:11.455 "num_base_bdevs_discovered": 3, 00:12:11.455 "num_base_bdevs_operational": 3, 00:12:11.455 "base_bdevs_list": [ 
00:12:11.455 { 00:12:11.455 "name": "spare", 00:12:11.455 "uuid": "f9f1499e-bc33-530f-916b-1a0a28fcd411", 00:12:11.455 "is_configured": true, 00:12:11.455 "data_offset": 2048, 00:12:11.455 "data_size": 63488 00:12:11.455 }, 00:12:11.455 { 00:12:11.455 "name": "BaseBdev2", 00:12:11.455 "uuid": "37cf5a26-4876-5341-93c0-3aa200d77da4", 00:12:11.455 "is_configured": true, 00:12:11.455 "data_offset": 2048, 00:12:11.455 "data_size": 63488 00:12:11.455 }, 00:12:11.455 { 00:12:11.455 "name": "BaseBdev3", 00:12:11.455 "uuid": "38dd57c5-b60c-5acf-883b-ab7ac7b6fc50", 00:12:11.455 "is_configured": true, 00:12:11.455 "data_offset": 2048, 00:12:11.455 "data_size": 63488 00:12:11.455 } 00:12:11.455 ] 00:12:11.455 }' 00:12:11.455 19:53:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:11.455 19:53:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:11.455 19:53:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:11.455 19:53:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:11.455 19:53:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:12:11.455 19:53:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:11.455 19:53:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:11.455 19:53:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:11.455 19:53:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:11.455 19:53:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:11.455 19:53:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:11.455 19:53:02 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:11.455 19:53:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:11.455 19:53:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:11.455 19:53:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:11.455 19:53:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.455 19:53:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:11.455 19:53:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.455 19:53:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.455 19:53:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:11.455 "name": "raid_bdev1", 00:12:11.455 "uuid": "de858b71-3779-4fce-b7a4-8ab1434ac201", 00:12:11.455 "strip_size_kb": 64, 00:12:11.455 "state": "online", 00:12:11.455 "raid_level": "raid5f", 00:12:11.455 "superblock": true, 00:12:11.455 "num_base_bdevs": 3, 00:12:11.455 "num_base_bdevs_discovered": 3, 00:12:11.455 "num_base_bdevs_operational": 3, 00:12:11.455 "base_bdevs_list": [ 00:12:11.455 { 00:12:11.455 "name": "spare", 00:12:11.455 "uuid": "f9f1499e-bc33-530f-916b-1a0a28fcd411", 00:12:11.455 "is_configured": true, 00:12:11.455 "data_offset": 2048, 00:12:11.455 "data_size": 63488 00:12:11.455 }, 00:12:11.455 { 00:12:11.455 "name": "BaseBdev2", 00:12:11.455 "uuid": "37cf5a26-4876-5341-93c0-3aa200d77da4", 00:12:11.455 "is_configured": true, 00:12:11.455 "data_offset": 2048, 00:12:11.455 "data_size": 63488 00:12:11.455 }, 00:12:11.455 { 00:12:11.455 "name": "BaseBdev3", 00:12:11.455 "uuid": "38dd57c5-b60c-5acf-883b-ab7ac7b6fc50", 00:12:11.455 "is_configured": true, 00:12:11.455 "data_offset": 2048, 00:12:11.455 
"data_size": 63488 00:12:11.455 } 00:12:11.455 ] 00:12:11.455 }' 00:12:11.455 19:53:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:11.455 19:53:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.713 19:53:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:11.713 19:53:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.713 19:53:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.714 [2024-11-26 19:53:02.644799] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:11.714 [2024-11-26 19:53:02.644824] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:11.714 [2024-11-26 19:53:02.644904] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:11.714 [2024-11-26 19:53:02.644980] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:11.714 [2024-11-26 19:53:02.644994] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:12:11.971 19:53:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.971 19:53:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:11.971 19:53:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:12:11.971 19:53:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.972 19:53:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.972 19:53:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.972 19:53:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 
00:12:11.972 19:53:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:12:11.972 19:53:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:12:11.972 19:53:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:12:11.972 19:53:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:11.972 19:53:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:12:11.972 19:53:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:11.972 19:53:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:11.972 19:53:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:11.972 19:53:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:12:11.972 19:53:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:11.972 19:53:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:11.972 19:53:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:12:11.972 /dev/nbd0 00:12:11.972 19:53:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:11.972 19:53:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:11.972 19:53:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:11.972 19:53:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:12:11.972 19:53:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:11.972 19:53:02 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:11.972 19:53:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:12.229 19:53:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:12:12.229 19:53:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:12.229 19:53:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:12.229 19:53:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:12.229 1+0 records in 00:12:12.229 1+0 records out 00:12:12.229 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000201398 s, 20.3 MB/s 00:12:12.229 19:53:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:12.229 19:53:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:12:12.229 19:53:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:12.229 19:53:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:12.229 19:53:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:12:12.229 19:53:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:12.229 19:53:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:12.229 19:53:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:12:12.229 /dev/nbd1 00:12:12.229 19:53:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:12.229 19:53:03 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:12.229 19:53:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:12:12.229 19:53:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:12:12.229 19:53:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:12.229 19:53:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:12.229 19:53:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:12:12.230 19:53:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:12:12.230 19:53:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:12.230 19:53:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:12.230 19:53:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:12.230 1+0 records in 00:12:12.230 1+0 records out 00:12:12.230 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000314027 s, 13.0 MB/s 00:12:12.230 19:53:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:12.230 19:53:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:12:12.230 19:53:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:12.488 19:53:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:12.488 19:53:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:12:12.488 19:53:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:12.488 19:53:03 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:12.488 19:53:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:12:12.488 19:53:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:12:12.488 19:53:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:12.488 19:53:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:12.488 19:53:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:12.488 19:53:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:12:12.488 19:53:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:12.488 19:53:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:12.746 19:53:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:12.746 19:53:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:12.746 19:53:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:12.746 19:53:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:12.746 19:53:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:12.746 19:53:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:12.746 19:53:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:12:12.746 19:53:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:12:12.746 19:53:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:12.746 
19:53:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:12:13.004 19:53:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:13.004 19:53:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:13.004 19:53:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:13.004 19:53:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:13.004 19:53:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:13.004 19:53:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:13.004 19:53:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:12:13.004 19:53:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:12:13.004 19:53:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:12:13.004 19:53:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:12:13.004 19:53:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.004 19:53:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.004 19:53:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.004 19:53:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:13.004 19:53:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.004 19:53:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.004 [2024-11-26 19:53:03.711380] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:13.004 
[2024-11-26 19:53:03.711433] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:13.004 [2024-11-26 19:53:03.711452] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:12:13.004 [2024-11-26 19:53:03.711462] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:13.004 [2024-11-26 19:53:03.713477] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:13.004 [2024-11-26 19:53:03.713591] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:13.004 [2024-11-26 19:53:03.713682] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:12:13.004 [2024-11-26 19:53:03.713728] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:13.004 [2024-11-26 19:53:03.713843] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:13.004 [2024-11-26 19:53:03.713923] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:13.004 spare 00:12:13.004 19:53:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.004 19:53:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:12:13.004 19:53:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.004 19:53:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.004 [2024-11-26 19:53:03.814000] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:12:13.004 [2024-11-26 19:53:03.814024] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:12:13.004 [2024-11-26 19:53:03.814279] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000047700 00:12:13.004 [2024-11-26 19:53:03.817195] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:12:13.004 [2024-11-26 19:53:03.817287] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:12:13.004 [2024-11-26 19:53:03.817475] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:13.004 19:53:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.004 19:53:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:12:13.004 19:53:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:13.004 19:53:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:13.004 19:53:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:13.004 19:53:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:13.004 19:53:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:13.004 19:53:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:13.004 19:53:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:13.004 19:53:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:13.004 19:53:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:13.004 19:53:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:13.004 19:53:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:13.004 19:53:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.004 19:53:03 bdev_raid.raid5f_rebuild_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:12:13.004 19:53:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.004 19:53:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:13.004 "name": "raid_bdev1", 00:12:13.004 "uuid": "de858b71-3779-4fce-b7a4-8ab1434ac201", 00:12:13.004 "strip_size_kb": 64, 00:12:13.004 "state": "online", 00:12:13.004 "raid_level": "raid5f", 00:12:13.004 "superblock": true, 00:12:13.004 "num_base_bdevs": 3, 00:12:13.004 "num_base_bdevs_discovered": 3, 00:12:13.004 "num_base_bdevs_operational": 3, 00:12:13.004 "base_bdevs_list": [ 00:12:13.004 { 00:12:13.004 "name": "spare", 00:12:13.004 "uuid": "f9f1499e-bc33-530f-916b-1a0a28fcd411", 00:12:13.004 "is_configured": true, 00:12:13.004 "data_offset": 2048, 00:12:13.004 "data_size": 63488 00:12:13.004 }, 00:12:13.004 { 00:12:13.004 "name": "BaseBdev2", 00:12:13.004 "uuid": "37cf5a26-4876-5341-93c0-3aa200d77da4", 00:12:13.004 "is_configured": true, 00:12:13.004 "data_offset": 2048, 00:12:13.004 "data_size": 63488 00:12:13.004 }, 00:12:13.004 { 00:12:13.004 "name": "BaseBdev3", 00:12:13.004 "uuid": "38dd57c5-b60c-5acf-883b-ab7ac7b6fc50", 00:12:13.004 "is_configured": true, 00:12:13.004 "data_offset": 2048, 00:12:13.004 "data_size": 63488 00:12:13.004 } 00:12:13.004 ] 00:12:13.004 }' 00:12:13.004 19:53:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:13.004 19:53:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.262 19:53:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:13.262 19:53:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:13.262 19:53:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:13.262 19:53:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # 
local target=none 00:12:13.262 19:53:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:13.262 19:53:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:13.262 19:53:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.262 19:53:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.262 19:53:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:13.262 19:53:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.262 19:53:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:13.262 "name": "raid_bdev1", 00:12:13.262 "uuid": "de858b71-3779-4fce-b7a4-8ab1434ac201", 00:12:13.262 "strip_size_kb": 64, 00:12:13.262 "state": "online", 00:12:13.262 "raid_level": "raid5f", 00:12:13.262 "superblock": true, 00:12:13.262 "num_base_bdevs": 3, 00:12:13.262 "num_base_bdevs_discovered": 3, 00:12:13.262 "num_base_bdevs_operational": 3, 00:12:13.262 "base_bdevs_list": [ 00:12:13.262 { 00:12:13.262 "name": "spare", 00:12:13.262 "uuid": "f9f1499e-bc33-530f-916b-1a0a28fcd411", 00:12:13.262 "is_configured": true, 00:12:13.262 "data_offset": 2048, 00:12:13.262 "data_size": 63488 00:12:13.262 }, 00:12:13.262 { 00:12:13.262 "name": "BaseBdev2", 00:12:13.262 "uuid": "37cf5a26-4876-5341-93c0-3aa200d77da4", 00:12:13.262 "is_configured": true, 00:12:13.262 "data_offset": 2048, 00:12:13.262 "data_size": 63488 00:12:13.262 }, 00:12:13.262 { 00:12:13.262 "name": "BaseBdev3", 00:12:13.262 "uuid": "38dd57c5-b60c-5acf-883b-ab7ac7b6fc50", 00:12:13.262 "is_configured": true, 00:12:13.262 "data_offset": 2048, 00:12:13.262 "data_size": 63488 00:12:13.262 } 00:12:13.262 ] 00:12:13.262 }' 00:12:13.262 19:53:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // 
"none"' 00:12:13.521 19:53:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:13.521 19:53:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:13.521 19:53:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:13.521 19:53:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:12:13.521 19:53:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:13.521 19:53:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.521 19:53:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.521 19:53:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.521 19:53:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:12:13.521 19:53:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:13.521 19:53:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.521 19:53:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.521 [2024-11-26 19:53:04.293533] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:13.521 19:53:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.521 19:53:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:12:13.521 19:53:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:13.521 19:53:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:13.521 19:53:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # 
local raid_level=raid5f 00:12:13.521 19:53:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:13.521 19:53:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:13.521 19:53:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:13.521 19:53:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:13.521 19:53:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:13.521 19:53:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:13.521 19:53:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:13.521 19:53:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:13.521 19:53:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.521 19:53:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.521 19:53:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.521 19:53:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:13.521 "name": "raid_bdev1", 00:12:13.521 "uuid": "de858b71-3779-4fce-b7a4-8ab1434ac201", 00:12:13.521 "strip_size_kb": 64, 00:12:13.521 "state": "online", 00:12:13.521 "raid_level": "raid5f", 00:12:13.521 "superblock": true, 00:12:13.521 "num_base_bdevs": 3, 00:12:13.521 "num_base_bdevs_discovered": 2, 00:12:13.521 "num_base_bdevs_operational": 2, 00:12:13.521 "base_bdevs_list": [ 00:12:13.521 { 00:12:13.521 "name": null, 00:12:13.521 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:13.521 "is_configured": false, 00:12:13.521 "data_offset": 0, 00:12:13.521 "data_size": 63488 00:12:13.521 }, 00:12:13.521 { 00:12:13.521 "name": "BaseBdev2", 
00:12:13.521 "uuid": "37cf5a26-4876-5341-93c0-3aa200d77da4", 00:12:13.521 "is_configured": true, 00:12:13.521 "data_offset": 2048, 00:12:13.521 "data_size": 63488 00:12:13.521 }, 00:12:13.521 { 00:12:13.521 "name": "BaseBdev3", 00:12:13.521 "uuid": "38dd57c5-b60c-5acf-883b-ab7ac7b6fc50", 00:12:13.521 "is_configured": true, 00:12:13.521 "data_offset": 2048, 00:12:13.521 "data_size": 63488 00:12:13.521 } 00:12:13.521 ] 00:12:13.521 }' 00:12:13.521 19:53:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:13.521 19:53:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.779 19:53:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:13.779 19:53:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.779 19:53:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.779 [2024-11-26 19:53:04.617621] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:13.779 [2024-11-26 19:53:04.617801] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:12:13.779 [2024-11-26 19:53:04.617821] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:12:13.779 [2024-11-26 19:53:04.617854] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:13.779 [2024-11-26 19:53:04.626324] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000477d0 00:12:13.779 19:53:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.779 19:53:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:12:13.779 [2024-11-26 19:53:04.630844] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:14.715 19:53:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:14.715 19:53:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:14.715 19:53:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:14.715 19:53:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:14.715 19:53:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:14.715 19:53:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:14.715 19:53:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:14.715 19:53:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.715 19:53:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.715 19:53:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.973 19:53:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:14.973 "name": "raid_bdev1", 00:12:14.973 "uuid": "de858b71-3779-4fce-b7a4-8ab1434ac201", 00:12:14.973 "strip_size_kb": 64, 00:12:14.973 "state": "online", 00:12:14.973 
"raid_level": "raid5f", 00:12:14.973 "superblock": true, 00:12:14.973 "num_base_bdevs": 3, 00:12:14.973 "num_base_bdevs_discovered": 3, 00:12:14.973 "num_base_bdevs_operational": 3, 00:12:14.973 "process": { 00:12:14.973 "type": "rebuild", 00:12:14.973 "target": "spare", 00:12:14.973 "progress": { 00:12:14.973 "blocks": 20480, 00:12:14.973 "percent": 16 00:12:14.973 } 00:12:14.973 }, 00:12:14.973 "base_bdevs_list": [ 00:12:14.973 { 00:12:14.973 "name": "spare", 00:12:14.973 "uuid": "f9f1499e-bc33-530f-916b-1a0a28fcd411", 00:12:14.973 "is_configured": true, 00:12:14.973 "data_offset": 2048, 00:12:14.973 "data_size": 63488 00:12:14.973 }, 00:12:14.973 { 00:12:14.973 "name": "BaseBdev2", 00:12:14.973 "uuid": "37cf5a26-4876-5341-93c0-3aa200d77da4", 00:12:14.973 "is_configured": true, 00:12:14.973 "data_offset": 2048, 00:12:14.973 "data_size": 63488 00:12:14.973 }, 00:12:14.973 { 00:12:14.973 "name": "BaseBdev3", 00:12:14.973 "uuid": "38dd57c5-b60c-5acf-883b-ab7ac7b6fc50", 00:12:14.973 "is_configured": true, 00:12:14.973 "data_offset": 2048, 00:12:14.973 "data_size": 63488 00:12:14.973 } 00:12:14.973 ] 00:12:14.973 }' 00:12:14.973 19:53:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:14.973 19:53:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:14.973 19:53:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:14.973 19:53:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:14.973 19:53:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:12:14.973 19:53:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.973 19:53:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.973 [2024-11-26 19:53:05.744109] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:14.973 [2024-11-26 19:53:05.841132] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:14.973 [2024-11-26 19:53:05.841191] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:14.973 [2024-11-26 19:53:05.841205] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:14.973 [2024-11-26 19:53:05.841214] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:14.973 19:53:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.973 19:53:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:12:14.973 19:53:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:14.973 19:53:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:14.973 19:53:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:14.973 19:53:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:14.973 19:53:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:14.973 19:53:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:14.973 19:53:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:14.973 19:53:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:14.973 19:53:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:14.973 19:53:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:14.974 19:53:05 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:14.974 19:53:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.974 19:53:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.974 19:53:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.974 19:53:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:14.974 "name": "raid_bdev1", 00:12:14.974 "uuid": "de858b71-3779-4fce-b7a4-8ab1434ac201", 00:12:14.974 "strip_size_kb": 64, 00:12:14.974 "state": "online", 00:12:14.974 "raid_level": "raid5f", 00:12:14.974 "superblock": true, 00:12:14.974 "num_base_bdevs": 3, 00:12:14.974 "num_base_bdevs_discovered": 2, 00:12:14.974 "num_base_bdevs_operational": 2, 00:12:14.974 "base_bdevs_list": [ 00:12:14.974 { 00:12:14.974 "name": null, 00:12:14.974 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:14.974 "is_configured": false, 00:12:14.974 "data_offset": 0, 00:12:14.974 "data_size": 63488 00:12:14.974 }, 00:12:14.974 { 00:12:14.974 "name": "BaseBdev2", 00:12:14.974 "uuid": "37cf5a26-4876-5341-93c0-3aa200d77da4", 00:12:14.974 "is_configured": true, 00:12:14.974 "data_offset": 2048, 00:12:14.974 "data_size": 63488 00:12:14.974 }, 00:12:14.974 { 00:12:14.974 "name": "BaseBdev3", 00:12:14.974 "uuid": "38dd57c5-b60c-5acf-883b-ab7ac7b6fc50", 00:12:14.974 "is_configured": true, 00:12:14.974 "data_offset": 2048, 00:12:14.974 "data_size": 63488 00:12:14.974 } 00:12:14.974 ] 00:12:14.974 }' 00:12:14.974 19:53:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:14.974 19:53:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.541 19:53:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:15.541 19:53:06 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.541 19:53:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.541 [2024-11-26 19:53:06.184118] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:15.541 [2024-11-26 19:53:06.184182] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:15.541 [2024-11-26 19:53:06.184202] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:12:15.541 [2024-11-26 19:53:06.184214] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:15.541 [2024-11-26 19:53:06.184671] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:15.541 [2024-11-26 19:53:06.184696] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:15.541 [2024-11-26 19:53:06.184787] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:12:15.541 [2024-11-26 19:53:06.184806] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:12:15.541 [2024-11-26 19:53:06.184815] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:12:15.541 [2024-11-26 19:53:06.184838] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:15.541 [2024-11-26 19:53:06.193194] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000478a0 00:12:15.541 spare 00:12:15.541 19:53:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.541 19:53:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:12:15.542 [2024-11-26 19:53:06.197567] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:16.476 19:53:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:16.476 19:53:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:16.476 19:53:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:16.476 19:53:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:16.476 19:53:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:16.476 19:53:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:16.476 19:53:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:16.476 19:53:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.476 19:53:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.476 19:53:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.476 19:53:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:16.476 "name": "raid_bdev1", 00:12:16.476 "uuid": "de858b71-3779-4fce-b7a4-8ab1434ac201", 00:12:16.476 "strip_size_kb": 64, 00:12:16.476 "state": 
"online", 00:12:16.476 "raid_level": "raid5f", 00:12:16.476 "superblock": true, 00:12:16.476 "num_base_bdevs": 3, 00:12:16.476 "num_base_bdevs_discovered": 3, 00:12:16.476 "num_base_bdevs_operational": 3, 00:12:16.476 "process": { 00:12:16.476 "type": "rebuild", 00:12:16.476 "target": "spare", 00:12:16.476 "progress": { 00:12:16.476 "blocks": 20480, 00:12:16.476 "percent": 16 00:12:16.476 } 00:12:16.476 }, 00:12:16.476 "base_bdevs_list": [ 00:12:16.476 { 00:12:16.476 "name": "spare", 00:12:16.476 "uuid": "f9f1499e-bc33-530f-916b-1a0a28fcd411", 00:12:16.476 "is_configured": true, 00:12:16.476 "data_offset": 2048, 00:12:16.476 "data_size": 63488 00:12:16.476 }, 00:12:16.476 { 00:12:16.476 "name": "BaseBdev2", 00:12:16.476 "uuid": "37cf5a26-4876-5341-93c0-3aa200d77da4", 00:12:16.476 "is_configured": true, 00:12:16.476 "data_offset": 2048, 00:12:16.476 "data_size": 63488 00:12:16.476 }, 00:12:16.476 { 00:12:16.476 "name": "BaseBdev3", 00:12:16.476 "uuid": "38dd57c5-b60c-5acf-883b-ab7ac7b6fc50", 00:12:16.476 "is_configured": true, 00:12:16.476 "data_offset": 2048, 00:12:16.476 "data_size": 63488 00:12:16.476 } 00:12:16.476 ] 00:12:16.476 }' 00:12:16.476 19:53:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:16.476 19:53:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:16.476 19:53:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:16.476 19:53:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:16.476 19:53:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:12:16.476 19:53:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.476 19:53:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.476 [2024-11-26 19:53:07.310420] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:16.476 [2024-11-26 19:53:07.408117] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:16.476 [2024-11-26 19:53:07.408171] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:16.476 [2024-11-26 19:53:07.408187] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:16.476 [2024-11-26 19:53:07.408194] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:16.735 19:53:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.735 19:53:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:12:16.735 19:53:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:16.735 19:53:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:16.735 19:53:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:16.735 19:53:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:16.735 19:53:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:16.735 19:53:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:16.735 19:53:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:16.735 19:53:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:16.735 19:53:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:16.735 19:53:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:16.735 19:53:07 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:16.735 19:53:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.735 19:53:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.735 19:53:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.735 19:53:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:16.735 "name": "raid_bdev1", 00:12:16.735 "uuid": "de858b71-3779-4fce-b7a4-8ab1434ac201", 00:12:16.735 "strip_size_kb": 64, 00:12:16.735 "state": "online", 00:12:16.735 "raid_level": "raid5f", 00:12:16.735 "superblock": true, 00:12:16.735 "num_base_bdevs": 3, 00:12:16.735 "num_base_bdevs_discovered": 2, 00:12:16.735 "num_base_bdevs_operational": 2, 00:12:16.735 "base_bdevs_list": [ 00:12:16.735 { 00:12:16.735 "name": null, 00:12:16.735 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:16.735 "is_configured": false, 00:12:16.735 "data_offset": 0, 00:12:16.735 "data_size": 63488 00:12:16.735 }, 00:12:16.735 { 00:12:16.735 "name": "BaseBdev2", 00:12:16.735 "uuid": "37cf5a26-4876-5341-93c0-3aa200d77da4", 00:12:16.735 "is_configured": true, 00:12:16.735 "data_offset": 2048, 00:12:16.735 "data_size": 63488 00:12:16.735 }, 00:12:16.735 { 00:12:16.735 "name": "BaseBdev3", 00:12:16.735 "uuid": "38dd57c5-b60c-5acf-883b-ab7ac7b6fc50", 00:12:16.735 "is_configured": true, 00:12:16.735 "data_offset": 2048, 00:12:16.735 "data_size": 63488 00:12:16.735 } 00:12:16.735 ] 00:12:16.735 }' 00:12:16.735 19:53:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:16.735 19:53:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.993 19:53:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:16.993 19:53:07 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:16.993 19:53:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:16.993 19:53:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:16.993 19:53:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:16.993 19:53:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:16.993 19:53:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:16.993 19:53:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.993 19:53:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.993 19:53:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.993 19:53:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:16.993 "name": "raid_bdev1", 00:12:16.993 "uuid": "de858b71-3779-4fce-b7a4-8ab1434ac201", 00:12:16.993 "strip_size_kb": 64, 00:12:16.993 "state": "online", 00:12:16.993 "raid_level": "raid5f", 00:12:16.993 "superblock": true, 00:12:16.993 "num_base_bdevs": 3, 00:12:16.993 "num_base_bdevs_discovered": 2, 00:12:16.993 "num_base_bdevs_operational": 2, 00:12:16.993 "base_bdevs_list": [ 00:12:16.993 { 00:12:16.993 "name": null, 00:12:16.993 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:16.993 "is_configured": false, 00:12:16.993 "data_offset": 0, 00:12:16.993 "data_size": 63488 00:12:16.993 }, 00:12:16.993 { 00:12:16.993 "name": "BaseBdev2", 00:12:16.993 "uuid": "37cf5a26-4876-5341-93c0-3aa200d77da4", 00:12:16.993 "is_configured": true, 00:12:16.993 "data_offset": 2048, 00:12:16.993 "data_size": 63488 00:12:16.993 }, 00:12:16.993 { 00:12:16.993 "name": "BaseBdev3", 00:12:16.993 "uuid": "38dd57c5-b60c-5acf-883b-ab7ac7b6fc50", 00:12:16.993 
"is_configured": true, 00:12:16.993 "data_offset": 2048, 00:12:16.993 "data_size": 63488 00:12:16.993 } 00:12:16.993 ] 00:12:16.993 }' 00:12:16.993 19:53:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:16.993 19:53:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:16.993 19:53:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:16.993 19:53:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:16.993 19:53:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:12:16.993 19:53:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.993 19:53:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.993 19:53:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.993 19:53:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:16.993 19:53:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.993 19:53:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.993 [2024-11-26 19:53:07.859124] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:16.993 [2024-11-26 19:53:07.859174] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:16.993 [2024-11-26 19:53:07.859197] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:12:16.993 [2024-11-26 19:53:07.859204] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:16.994 [2024-11-26 19:53:07.859642] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:16.994 
[2024-11-26 19:53:07.859664] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:16.994 [2024-11-26 19:53:07.859734] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:12:16.994 [2024-11-26 19:53:07.859748] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:12:16.994 [2024-11-26 19:53:07.859758] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:12:16.994 [2024-11-26 19:53:07.859777] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:12:16.994 BaseBdev1 00:12:16.994 19:53:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.994 19:53:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:12:18.369 19:53:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:12:18.369 19:53:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:18.369 19:53:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:18.369 19:53:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:18.369 19:53:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:18.369 19:53:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:18.369 19:53:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:18.369 19:53:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:18.369 19:53:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:18.369 19:53:08 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:18.369 19:53:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:18.369 19:53:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.369 19:53:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:18.369 19:53:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.369 19:53:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.369 19:53:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:18.369 "name": "raid_bdev1", 00:12:18.369 "uuid": "de858b71-3779-4fce-b7a4-8ab1434ac201", 00:12:18.369 "strip_size_kb": 64, 00:12:18.369 "state": "online", 00:12:18.369 "raid_level": "raid5f", 00:12:18.369 "superblock": true, 00:12:18.369 "num_base_bdevs": 3, 00:12:18.369 "num_base_bdevs_discovered": 2, 00:12:18.369 "num_base_bdevs_operational": 2, 00:12:18.369 "base_bdevs_list": [ 00:12:18.369 { 00:12:18.369 "name": null, 00:12:18.369 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:18.369 "is_configured": false, 00:12:18.369 "data_offset": 0, 00:12:18.369 "data_size": 63488 00:12:18.369 }, 00:12:18.369 { 00:12:18.369 "name": "BaseBdev2", 00:12:18.369 "uuid": "37cf5a26-4876-5341-93c0-3aa200d77da4", 00:12:18.369 "is_configured": true, 00:12:18.369 "data_offset": 2048, 00:12:18.369 "data_size": 63488 00:12:18.369 }, 00:12:18.369 { 00:12:18.369 "name": "BaseBdev3", 00:12:18.369 "uuid": "38dd57c5-b60c-5acf-883b-ab7ac7b6fc50", 00:12:18.369 "is_configured": true, 00:12:18.369 "data_offset": 2048, 00:12:18.369 "data_size": 63488 00:12:18.369 } 00:12:18.369 ] 00:12:18.369 }' 00:12:18.369 19:53:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:18.369 19:53:08 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:12:18.369 19:53:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:18.369 19:53:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:18.369 19:53:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:18.369 19:53:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:18.369 19:53:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:18.369 19:53:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:18.369 19:53:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:18.369 19:53:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.369 19:53:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.369 19:53:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.369 19:53:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:18.369 "name": "raid_bdev1", 00:12:18.369 "uuid": "de858b71-3779-4fce-b7a4-8ab1434ac201", 00:12:18.369 "strip_size_kb": 64, 00:12:18.369 "state": "online", 00:12:18.369 "raid_level": "raid5f", 00:12:18.369 "superblock": true, 00:12:18.369 "num_base_bdevs": 3, 00:12:18.369 "num_base_bdevs_discovered": 2, 00:12:18.369 "num_base_bdevs_operational": 2, 00:12:18.369 "base_bdevs_list": [ 00:12:18.369 { 00:12:18.369 "name": null, 00:12:18.369 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:18.369 "is_configured": false, 00:12:18.369 "data_offset": 0, 00:12:18.369 "data_size": 63488 00:12:18.369 }, 00:12:18.369 { 00:12:18.369 "name": "BaseBdev2", 00:12:18.369 "uuid": "37cf5a26-4876-5341-93c0-3aa200d77da4", 
00:12:18.369 "is_configured": true, 00:12:18.369 "data_offset": 2048, 00:12:18.369 "data_size": 63488 00:12:18.369 }, 00:12:18.369 { 00:12:18.369 "name": "BaseBdev3", 00:12:18.369 "uuid": "38dd57c5-b60c-5acf-883b-ab7ac7b6fc50", 00:12:18.369 "is_configured": true, 00:12:18.369 "data_offset": 2048, 00:12:18.369 "data_size": 63488 00:12:18.369 } 00:12:18.369 ] 00:12:18.369 }' 00:12:18.369 19:53:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:18.369 19:53:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:18.369 19:53:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:18.369 19:53:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:18.369 19:53:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:18.369 19:53:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:12:18.369 19:53:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:18.369 19:53:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:12:18.369 19:53:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:18.369 19:53:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:12:18.369 19:53:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:18.369 19:53:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:18.369 19:53:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.369 19:53:09 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.369 [2024-11-26 19:53:09.283447] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:18.370 [2024-11-26 19:53:09.283599] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:12:18.370 [2024-11-26 19:53:09.283611] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:12:18.370 request: 00:12:18.370 { 00:12:18.370 "base_bdev": "BaseBdev1", 00:12:18.370 "raid_bdev": "raid_bdev1", 00:12:18.370 "method": "bdev_raid_add_base_bdev", 00:12:18.370 "req_id": 1 00:12:18.370 } 00:12:18.370 Got JSON-RPC error response 00:12:18.370 response: 00:12:18.370 { 00:12:18.370 "code": -22, 00:12:18.370 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:12:18.370 } 00:12:18.370 19:53:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:12:18.370 19:53:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:12:18.370 19:53:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:18.370 19:53:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:18.370 19:53:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:18.370 19:53:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:12:19.742 19:53:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:12:19.742 19:53:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:19.742 19:53:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:19.742 19:53:10 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:19.743 19:53:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:19.743 19:53:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:19.743 19:53:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:19.743 19:53:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:19.743 19:53:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:19.743 19:53:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:19.743 19:53:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:19.743 19:53:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:19.743 19:53:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.743 19:53:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.743 19:53:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.743 19:53:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:19.743 "name": "raid_bdev1", 00:12:19.743 "uuid": "de858b71-3779-4fce-b7a4-8ab1434ac201", 00:12:19.743 "strip_size_kb": 64, 00:12:19.743 "state": "online", 00:12:19.743 "raid_level": "raid5f", 00:12:19.743 "superblock": true, 00:12:19.743 "num_base_bdevs": 3, 00:12:19.743 "num_base_bdevs_discovered": 2, 00:12:19.743 "num_base_bdevs_operational": 2, 00:12:19.743 "base_bdevs_list": [ 00:12:19.743 { 00:12:19.743 "name": null, 00:12:19.743 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:19.743 "is_configured": false, 00:12:19.743 "data_offset": 0, 00:12:19.743 "data_size": 63488 00:12:19.743 }, 00:12:19.743 { 00:12:19.743 
"name": "BaseBdev2", 00:12:19.743 "uuid": "37cf5a26-4876-5341-93c0-3aa200d77da4", 00:12:19.743 "is_configured": true, 00:12:19.743 "data_offset": 2048, 00:12:19.743 "data_size": 63488 00:12:19.743 }, 00:12:19.743 { 00:12:19.743 "name": "BaseBdev3", 00:12:19.743 "uuid": "38dd57c5-b60c-5acf-883b-ab7ac7b6fc50", 00:12:19.743 "is_configured": true, 00:12:19.743 "data_offset": 2048, 00:12:19.743 "data_size": 63488 00:12:19.743 } 00:12:19.743 ] 00:12:19.743 }' 00:12:19.743 19:53:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:19.743 19:53:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.743 19:53:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:19.743 19:53:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:19.743 19:53:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:19.743 19:53:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:19.743 19:53:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:19.743 19:53:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:19.743 19:53:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:19.743 19:53:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.743 19:53:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.743 19:53:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.743 19:53:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:19.743 "name": "raid_bdev1", 00:12:19.743 "uuid": "de858b71-3779-4fce-b7a4-8ab1434ac201", 00:12:19.743 
"strip_size_kb": 64, 00:12:19.743 "state": "online", 00:12:19.743 "raid_level": "raid5f", 00:12:19.743 "superblock": true, 00:12:19.743 "num_base_bdevs": 3, 00:12:19.743 "num_base_bdevs_discovered": 2, 00:12:19.743 "num_base_bdevs_operational": 2, 00:12:19.743 "base_bdevs_list": [ 00:12:19.743 { 00:12:19.743 "name": null, 00:12:19.743 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:19.743 "is_configured": false, 00:12:19.743 "data_offset": 0, 00:12:19.743 "data_size": 63488 00:12:19.743 }, 00:12:19.743 { 00:12:19.743 "name": "BaseBdev2", 00:12:19.743 "uuid": "37cf5a26-4876-5341-93c0-3aa200d77da4", 00:12:19.743 "is_configured": true, 00:12:19.743 "data_offset": 2048, 00:12:19.743 "data_size": 63488 00:12:19.743 }, 00:12:19.743 { 00:12:19.743 "name": "BaseBdev3", 00:12:19.743 "uuid": "38dd57c5-b60c-5acf-883b-ab7ac7b6fc50", 00:12:19.743 "is_configured": true, 00:12:19.743 "data_offset": 2048, 00:12:19.743 "data_size": 63488 00:12:19.743 } 00:12:19.743 ] 00:12:19.743 }' 00:12:19.743 19:53:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:19.743 19:53:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:19.743 19:53:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:20.002 19:53:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:20.002 19:53:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 79678 00:12:20.002 19:53:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 79678 ']' 00:12:20.002 19:53:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 79678 00:12:20.002 19:53:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:12:20.002 19:53:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:20.002 19:53:10 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79678 00:12:20.002 19:53:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:20.002 19:53:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:20.002 killing process with pid 79678 00:12:20.002 19:53:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79678' 00:12:20.002 19:53:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 79678 00:12:20.002 Received shutdown signal, test time was about 60.000000 seconds 00:12:20.002 00:12:20.002 Latency(us) 00:12:20.002 [2024-11-26T19:53:10.939Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:20.002 [2024-11-26T19:53:10.939Z] =================================================================================================================== 00:12:20.002 [2024-11-26T19:53:10.939Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:12:20.002 [2024-11-26 19:53:10.725783] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:20.002 [2024-11-26 19:53:10.725900] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:20.002 [2024-11-26 19:53:10.725967] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:20.002 [2024-11-26 19:53:10.725990] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:12:20.002 19:53:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 79678 00:12:20.002 [2024-11-26 19:53:10.931421] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:20.937 19:53:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:12:20.937 00:12:20.937 real 0m20.161s 00:12:20.937 user 0m25.112s 
00:12:20.937 sys 0m1.976s 00:12:20.937 19:53:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:20.937 19:53:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.937 ************************************ 00:12:20.937 END TEST raid5f_rebuild_test_sb 00:12:20.937 ************************************ 00:12:20.937 19:53:11 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:12:20.937 19:53:11 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 4 false 00:12:20.937 19:53:11 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:20.937 19:53:11 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:20.937 19:53:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:20.937 ************************************ 00:12:20.937 START TEST raid5f_state_function_test 00:12:20.937 ************************************ 00:12:20.937 19:53:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 4 false 00:12:20.937 19:53:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:12:20.937 19:53:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:12:20.937 19:53:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:12:20.937 19:53:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:12:20.937 19:53:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:12:20.937 19:53:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:20.937 19:53:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:12:20.937 19:53:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
00:12:20.937 19:53:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:20.937 19:53:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:12:20.937 19:53:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:20.937 19:53:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:20.937 19:53:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:12:20.937 19:53:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:20.937 19:53:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:20.937 19:53:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:12:20.937 19:53:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:20.937 19:53:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:20.937 19:53:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:20.937 19:53:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:12:20.937 19:53:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:12:20.937 19:53:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:12:20.937 19:53:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:12:20.937 19:53:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:12:20.937 19:53:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:12:20.937 19:53:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # 
strip_size=64 00:12:20.937 19:53:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:12:20.937 19:53:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:12:20.937 19:53:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:12:20.937 19:53:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=80402 00:12:20.937 Process raid pid: 80402 00:12:20.937 19:53:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 80402' 00:12:20.937 19:53:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 80402 00:12:20.937 19:53:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 80402 ']' 00:12:20.937 19:53:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:20.937 19:53:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:20.937 19:53:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:20.937 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:20.937 19:53:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:20.937 19:53:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:20.937 19:53:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.937 [2024-11-26 19:53:11.687545] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 
00:12:20.937 [2024-11-26 19:53:11.688071] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:20.937 [2024-11-26 19:53:11.843180] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:21.196 [2024-11-26 19:53:11.943283] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:21.196 [2024-11-26 19:53:12.065083] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:21.196 [2024-11-26 19:53:12.065121] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:21.762 19:53:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:21.762 19:53:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:12:21.762 19:53:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:21.762 19:53:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.762 19:53:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.762 [2024-11-26 19:53:12.470667] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:21.762 [2024-11-26 19:53:12.470718] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:21.762 [2024-11-26 19:53:12.470727] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:21.762 [2024-11-26 19:53:12.470736] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:21.762 [2024-11-26 19:53:12.470741] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:12:21.762 [2024-11-26 19:53:12.470748] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:21.762 [2024-11-26 19:53:12.470753] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:21.762 [2024-11-26 19:53:12.470760] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:21.762 19:53:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.762 19:53:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:12:21.762 19:53:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:21.762 19:53:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:21.762 19:53:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:21.762 19:53:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:21.762 19:53:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:21.762 19:53:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:21.762 19:53:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:21.762 19:53:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:21.762 19:53:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:21.762 19:53:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:21.762 19:53:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:21.762 19:53:12 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.762 19:53:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.762 19:53:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.762 19:53:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:21.762 "name": "Existed_Raid", 00:12:21.762 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:21.762 "strip_size_kb": 64, 00:12:21.762 "state": "configuring", 00:12:21.762 "raid_level": "raid5f", 00:12:21.762 "superblock": false, 00:12:21.762 "num_base_bdevs": 4, 00:12:21.762 "num_base_bdevs_discovered": 0, 00:12:21.762 "num_base_bdevs_operational": 4, 00:12:21.762 "base_bdevs_list": [ 00:12:21.762 { 00:12:21.762 "name": "BaseBdev1", 00:12:21.762 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:21.762 "is_configured": false, 00:12:21.762 "data_offset": 0, 00:12:21.762 "data_size": 0 00:12:21.762 }, 00:12:21.762 { 00:12:21.762 "name": "BaseBdev2", 00:12:21.762 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:21.762 "is_configured": false, 00:12:21.762 "data_offset": 0, 00:12:21.762 "data_size": 0 00:12:21.762 }, 00:12:21.762 { 00:12:21.762 "name": "BaseBdev3", 00:12:21.762 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:21.762 "is_configured": false, 00:12:21.762 "data_offset": 0, 00:12:21.762 "data_size": 0 00:12:21.762 }, 00:12:21.762 { 00:12:21.762 "name": "BaseBdev4", 00:12:21.762 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:21.762 "is_configured": false, 00:12:21.762 "data_offset": 0, 00:12:21.762 "data_size": 0 00:12:21.762 } 00:12:21.762 ] 00:12:21.762 }' 00:12:21.762 19:53:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:21.762 19:53:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.021 19:53:12 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:22.021 19:53:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.021 19:53:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.021 [2024-11-26 19:53:12.782684] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:22.021 [2024-11-26 19:53:12.782722] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:12:22.021 19:53:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.021 19:53:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:22.021 19:53:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.021 19:53:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.021 [2024-11-26 19:53:12.790682] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:22.021 [2024-11-26 19:53:12.790719] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:22.021 [2024-11-26 19:53:12.790727] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:22.021 [2024-11-26 19:53:12.790734] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:22.021 [2024-11-26 19:53:12.790739] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:22.021 [2024-11-26 19:53:12.790746] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:22.021 [2024-11-26 19:53:12.790752] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:12:22.021 [2024-11-26 19:53:12.790759] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:22.021 19:53:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.021 19:53:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:22.021 19:53:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.021 19:53:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.021 [2024-11-26 19:53:12.820896] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:22.021 BaseBdev1 00:12:22.021 19:53:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.021 19:53:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:12:22.021 19:53:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:22.021 19:53:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:22.021 19:53:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:22.021 19:53:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:22.021 19:53:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:22.021 19:53:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:22.022 19:53:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.022 19:53:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.022 19:53:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.022 
19:53:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:22.022 19:53:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.022 19:53:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.022 [ 00:12:22.022 { 00:12:22.022 "name": "BaseBdev1", 00:12:22.022 "aliases": [ 00:12:22.022 "4f751151-cf58-4638-8f61-f9b0a7968b23" 00:12:22.022 ], 00:12:22.022 "product_name": "Malloc disk", 00:12:22.022 "block_size": 512, 00:12:22.022 "num_blocks": 65536, 00:12:22.022 "uuid": "4f751151-cf58-4638-8f61-f9b0a7968b23", 00:12:22.022 "assigned_rate_limits": { 00:12:22.022 "rw_ios_per_sec": 0, 00:12:22.022 "rw_mbytes_per_sec": 0, 00:12:22.022 "r_mbytes_per_sec": 0, 00:12:22.022 "w_mbytes_per_sec": 0 00:12:22.022 }, 00:12:22.022 "claimed": true, 00:12:22.022 "claim_type": "exclusive_write", 00:12:22.022 "zoned": false, 00:12:22.022 "supported_io_types": { 00:12:22.022 "read": true, 00:12:22.022 "write": true, 00:12:22.022 "unmap": true, 00:12:22.022 "flush": true, 00:12:22.022 "reset": true, 00:12:22.022 "nvme_admin": false, 00:12:22.022 "nvme_io": false, 00:12:22.022 "nvme_io_md": false, 00:12:22.022 "write_zeroes": true, 00:12:22.022 "zcopy": true, 00:12:22.022 "get_zone_info": false, 00:12:22.022 "zone_management": false, 00:12:22.022 "zone_append": false, 00:12:22.022 "compare": false, 00:12:22.022 "compare_and_write": false, 00:12:22.022 "abort": true, 00:12:22.022 "seek_hole": false, 00:12:22.022 "seek_data": false, 00:12:22.022 "copy": true, 00:12:22.022 "nvme_iov_md": false 00:12:22.022 }, 00:12:22.022 "memory_domains": [ 00:12:22.022 { 00:12:22.022 "dma_device_id": "system", 00:12:22.022 "dma_device_type": 1 00:12:22.022 }, 00:12:22.022 { 00:12:22.022 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:22.022 "dma_device_type": 2 00:12:22.022 } 00:12:22.022 ], 00:12:22.022 "driver_specific": {} 00:12:22.022 } 
00:12:22.022 ] 00:12:22.022 19:53:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.022 19:53:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:22.022 19:53:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:12:22.022 19:53:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:22.022 19:53:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:22.022 19:53:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:22.022 19:53:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:22.022 19:53:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:22.022 19:53:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:22.022 19:53:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:22.022 19:53:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:22.022 19:53:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:22.022 19:53:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:22.022 19:53:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.022 19:53:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.022 19:53:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:22.022 19:53:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:12:22.022 19:53:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:22.022 "name": "Existed_Raid", 00:12:22.022 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:22.022 "strip_size_kb": 64, 00:12:22.022 "state": "configuring", 00:12:22.022 "raid_level": "raid5f", 00:12:22.022 "superblock": false, 00:12:22.022 "num_base_bdevs": 4, 00:12:22.022 "num_base_bdevs_discovered": 1, 00:12:22.022 "num_base_bdevs_operational": 4, 00:12:22.022 "base_bdevs_list": [ 00:12:22.022 { 00:12:22.022 "name": "BaseBdev1", 00:12:22.022 "uuid": "4f751151-cf58-4638-8f61-f9b0a7968b23", 00:12:22.022 "is_configured": true, 00:12:22.022 "data_offset": 0, 00:12:22.022 "data_size": 65536 00:12:22.022 }, 00:12:22.022 { 00:12:22.022 "name": "BaseBdev2", 00:12:22.022 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:22.022 "is_configured": false, 00:12:22.022 "data_offset": 0, 00:12:22.022 "data_size": 0 00:12:22.022 }, 00:12:22.022 { 00:12:22.022 "name": "BaseBdev3", 00:12:22.022 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:22.022 "is_configured": false, 00:12:22.022 "data_offset": 0, 00:12:22.022 "data_size": 0 00:12:22.022 }, 00:12:22.022 { 00:12:22.022 "name": "BaseBdev4", 00:12:22.022 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:22.022 "is_configured": false, 00:12:22.022 "data_offset": 0, 00:12:22.022 "data_size": 0 00:12:22.022 } 00:12:22.022 ] 00:12:22.022 }' 00:12:22.022 19:53:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:22.022 19:53:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.335 19:53:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:22.335 19:53:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.335 19:53:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.335 
[2024-11-26 19:53:13.160995] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:22.335 [2024-11-26 19:53:13.161048] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:12:22.335 19:53:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.335 19:53:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:22.335 19:53:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.335 19:53:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.335 [2024-11-26 19:53:13.169045] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:22.335 [2024-11-26 19:53:13.170694] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:22.335 [2024-11-26 19:53:13.170733] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:22.335 [2024-11-26 19:53:13.170742] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:22.335 [2024-11-26 19:53:13.170750] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:22.335 [2024-11-26 19:53:13.170756] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:22.335 [2024-11-26 19:53:13.170763] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:22.335 19:53:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.335 19:53:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:12:22.335 19:53:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( 
i < num_base_bdevs )) 00:12:22.335 19:53:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:12:22.335 19:53:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:22.335 19:53:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:22.335 19:53:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:22.335 19:53:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:22.335 19:53:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:22.335 19:53:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:22.335 19:53:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:22.335 19:53:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:22.335 19:53:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:22.335 19:53:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:22.335 19:53:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.335 19:53:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.335 19:53:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:22.335 19:53:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.335 19:53:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:22.335 "name": "Existed_Raid", 00:12:22.335 "uuid": "00000000-0000-0000-0000-000000000000", 
00:12:22.335 "strip_size_kb": 64, 00:12:22.335 "state": "configuring", 00:12:22.335 "raid_level": "raid5f", 00:12:22.335 "superblock": false, 00:12:22.335 "num_base_bdevs": 4, 00:12:22.335 "num_base_bdevs_discovered": 1, 00:12:22.335 "num_base_bdevs_operational": 4, 00:12:22.335 "base_bdevs_list": [ 00:12:22.335 { 00:12:22.335 "name": "BaseBdev1", 00:12:22.335 "uuid": "4f751151-cf58-4638-8f61-f9b0a7968b23", 00:12:22.335 "is_configured": true, 00:12:22.335 "data_offset": 0, 00:12:22.335 "data_size": 65536 00:12:22.335 }, 00:12:22.335 { 00:12:22.335 "name": "BaseBdev2", 00:12:22.335 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:22.335 "is_configured": false, 00:12:22.335 "data_offset": 0, 00:12:22.335 "data_size": 0 00:12:22.335 }, 00:12:22.335 { 00:12:22.335 "name": "BaseBdev3", 00:12:22.335 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:22.335 "is_configured": false, 00:12:22.335 "data_offset": 0, 00:12:22.335 "data_size": 0 00:12:22.335 }, 00:12:22.335 { 00:12:22.335 "name": "BaseBdev4", 00:12:22.335 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:22.335 "is_configured": false, 00:12:22.335 "data_offset": 0, 00:12:22.335 "data_size": 0 00:12:22.335 } 00:12:22.335 ] 00:12:22.335 }' 00:12:22.335 19:53:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:22.335 19:53:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.641 19:53:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:22.641 19:53:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.641 19:53:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.641 [2024-11-26 19:53:13.513483] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:22.641 BaseBdev2 00:12:22.641 19:53:13 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.642 19:53:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:12:22.642 19:53:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:22.642 19:53:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:22.642 19:53:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:22.642 19:53:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:22.642 19:53:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:22.642 19:53:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:22.642 19:53:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.642 19:53:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.642 19:53:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.642 19:53:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:22.642 19:53:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.642 19:53:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.642 [ 00:12:22.642 { 00:12:22.642 "name": "BaseBdev2", 00:12:22.642 "aliases": [ 00:12:22.642 "71d9abaf-c6f4-49e8-9521-a2af4f59fcbb" 00:12:22.642 ], 00:12:22.642 "product_name": "Malloc disk", 00:12:22.642 "block_size": 512, 00:12:22.642 "num_blocks": 65536, 00:12:22.642 "uuid": "71d9abaf-c6f4-49e8-9521-a2af4f59fcbb", 00:12:22.642 "assigned_rate_limits": { 00:12:22.642 "rw_ios_per_sec": 0, 00:12:22.642 "rw_mbytes_per_sec": 0, 00:12:22.642 
"r_mbytes_per_sec": 0, 00:12:22.642 "w_mbytes_per_sec": 0 00:12:22.642 }, 00:12:22.642 "claimed": true, 00:12:22.642 "claim_type": "exclusive_write", 00:12:22.642 "zoned": false, 00:12:22.642 "supported_io_types": { 00:12:22.642 "read": true, 00:12:22.642 "write": true, 00:12:22.642 "unmap": true, 00:12:22.642 "flush": true, 00:12:22.642 "reset": true, 00:12:22.642 "nvme_admin": false, 00:12:22.642 "nvme_io": false, 00:12:22.642 "nvme_io_md": false, 00:12:22.642 "write_zeroes": true, 00:12:22.642 "zcopy": true, 00:12:22.642 "get_zone_info": false, 00:12:22.642 "zone_management": false, 00:12:22.642 "zone_append": false, 00:12:22.642 "compare": false, 00:12:22.642 "compare_and_write": false, 00:12:22.642 "abort": true, 00:12:22.642 "seek_hole": false, 00:12:22.642 "seek_data": false, 00:12:22.642 "copy": true, 00:12:22.642 "nvme_iov_md": false 00:12:22.642 }, 00:12:22.642 "memory_domains": [ 00:12:22.642 { 00:12:22.642 "dma_device_id": "system", 00:12:22.642 "dma_device_type": 1 00:12:22.642 }, 00:12:22.642 { 00:12:22.642 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:22.642 "dma_device_type": 2 00:12:22.642 } 00:12:22.642 ], 00:12:22.642 "driver_specific": {} 00:12:22.642 } 00:12:22.642 ] 00:12:22.642 19:53:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.642 19:53:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:22.642 19:53:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:22.642 19:53:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:22.642 19:53:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:12:22.642 19:53:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:22.642 19:53:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # 
local expected_state=configuring 00:12:22.642 19:53:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:22.642 19:53:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:22.642 19:53:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:22.642 19:53:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:22.642 19:53:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:22.642 19:53:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:22.642 19:53:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:22.642 19:53:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:22.642 19:53:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.642 19:53:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.642 19:53:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:22.642 19:53:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.642 19:53:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:22.642 "name": "Existed_Raid", 00:12:22.642 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:22.642 "strip_size_kb": 64, 00:12:22.642 "state": "configuring", 00:12:22.642 "raid_level": "raid5f", 00:12:22.642 "superblock": false, 00:12:22.642 "num_base_bdevs": 4, 00:12:22.642 "num_base_bdevs_discovered": 2, 00:12:22.642 "num_base_bdevs_operational": 4, 00:12:22.642 "base_bdevs_list": [ 00:12:22.642 { 00:12:22.642 "name": "BaseBdev1", 00:12:22.642 "uuid": 
"4f751151-cf58-4638-8f61-f9b0a7968b23", 00:12:22.642 "is_configured": true, 00:12:22.642 "data_offset": 0, 00:12:22.642 "data_size": 65536 00:12:22.642 }, 00:12:22.642 { 00:12:22.642 "name": "BaseBdev2", 00:12:22.642 "uuid": "71d9abaf-c6f4-49e8-9521-a2af4f59fcbb", 00:12:22.642 "is_configured": true, 00:12:22.642 "data_offset": 0, 00:12:22.642 "data_size": 65536 00:12:22.642 }, 00:12:22.642 { 00:12:22.642 "name": "BaseBdev3", 00:12:22.642 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:22.642 "is_configured": false, 00:12:22.642 "data_offset": 0, 00:12:22.642 "data_size": 0 00:12:22.642 }, 00:12:22.642 { 00:12:22.642 "name": "BaseBdev4", 00:12:22.642 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:22.642 "is_configured": false, 00:12:22.642 "data_offset": 0, 00:12:22.642 "data_size": 0 00:12:22.642 } 00:12:22.642 ] 00:12:22.642 }' 00:12:22.642 19:53:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:22.642 19:53:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.900 19:53:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:22.900 19:53:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.900 19:53:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.159 [2024-11-26 19:53:13.861982] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:23.159 BaseBdev3 00:12:23.159 19:53:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.159 19:53:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:12:23.159 19:53:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:12:23.159 19:53:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- 
# local bdev_timeout= 00:12:23.159 19:53:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:23.159 19:53:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:23.159 19:53:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:23.159 19:53:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:23.159 19:53:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.159 19:53:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.159 19:53:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.159 19:53:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:23.159 19:53:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.159 19:53:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.159 [ 00:12:23.159 { 00:12:23.159 "name": "BaseBdev3", 00:12:23.159 "aliases": [ 00:12:23.159 "137e6536-5cca-4f0e-a2d2-e7fbc1d4ab68" 00:12:23.159 ], 00:12:23.159 "product_name": "Malloc disk", 00:12:23.159 "block_size": 512, 00:12:23.159 "num_blocks": 65536, 00:12:23.159 "uuid": "137e6536-5cca-4f0e-a2d2-e7fbc1d4ab68", 00:12:23.159 "assigned_rate_limits": { 00:12:23.159 "rw_ios_per_sec": 0, 00:12:23.159 "rw_mbytes_per_sec": 0, 00:12:23.159 "r_mbytes_per_sec": 0, 00:12:23.159 "w_mbytes_per_sec": 0 00:12:23.159 }, 00:12:23.159 "claimed": true, 00:12:23.159 "claim_type": "exclusive_write", 00:12:23.159 "zoned": false, 00:12:23.159 "supported_io_types": { 00:12:23.159 "read": true, 00:12:23.159 "write": true, 00:12:23.159 "unmap": true, 00:12:23.159 "flush": true, 00:12:23.159 "reset": true, 00:12:23.159 "nvme_admin": false, 
00:12:23.159 "nvme_io": false, 00:12:23.159 "nvme_io_md": false, 00:12:23.159 "write_zeroes": true, 00:12:23.159 "zcopy": true, 00:12:23.159 "get_zone_info": false, 00:12:23.159 "zone_management": false, 00:12:23.159 "zone_append": false, 00:12:23.159 "compare": false, 00:12:23.159 "compare_and_write": false, 00:12:23.159 "abort": true, 00:12:23.159 "seek_hole": false, 00:12:23.159 "seek_data": false, 00:12:23.159 "copy": true, 00:12:23.159 "nvme_iov_md": false 00:12:23.159 }, 00:12:23.159 "memory_domains": [ 00:12:23.159 { 00:12:23.159 "dma_device_id": "system", 00:12:23.159 "dma_device_type": 1 00:12:23.159 }, 00:12:23.159 { 00:12:23.159 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:23.159 "dma_device_type": 2 00:12:23.159 } 00:12:23.159 ], 00:12:23.159 "driver_specific": {} 00:12:23.159 } 00:12:23.159 ] 00:12:23.159 19:53:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.159 19:53:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:23.159 19:53:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:23.159 19:53:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:23.159 19:53:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:12:23.159 19:53:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:23.159 19:53:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:23.159 19:53:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:23.159 19:53:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:23.159 19:53:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 
00:12:23.159 19:53:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:23.159 19:53:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:23.159 19:53:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:23.159 19:53:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:23.159 19:53:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:23.159 19:53:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.159 19:53:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.159 19:53:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:23.159 19:53:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.159 19:53:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:23.159 "name": "Existed_Raid", 00:12:23.159 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:23.159 "strip_size_kb": 64, 00:12:23.159 "state": "configuring", 00:12:23.159 "raid_level": "raid5f", 00:12:23.159 "superblock": false, 00:12:23.159 "num_base_bdevs": 4, 00:12:23.159 "num_base_bdevs_discovered": 3, 00:12:23.159 "num_base_bdevs_operational": 4, 00:12:23.159 "base_bdevs_list": [ 00:12:23.159 { 00:12:23.159 "name": "BaseBdev1", 00:12:23.159 "uuid": "4f751151-cf58-4638-8f61-f9b0a7968b23", 00:12:23.159 "is_configured": true, 00:12:23.159 "data_offset": 0, 00:12:23.159 "data_size": 65536 00:12:23.159 }, 00:12:23.159 { 00:12:23.159 "name": "BaseBdev2", 00:12:23.159 "uuid": "71d9abaf-c6f4-49e8-9521-a2af4f59fcbb", 00:12:23.159 "is_configured": true, 00:12:23.159 "data_offset": 0, 00:12:23.159 "data_size": 65536 00:12:23.159 }, 00:12:23.159 { 
00:12:23.159 "name": "BaseBdev3", 00:12:23.159 "uuid": "137e6536-5cca-4f0e-a2d2-e7fbc1d4ab68", 00:12:23.159 "is_configured": true, 00:12:23.159 "data_offset": 0, 00:12:23.159 "data_size": 65536 00:12:23.159 }, 00:12:23.159 { 00:12:23.159 "name": "BaseBdev4", 00:12:23.159 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:23.159 "is_configured": false, 00:12:23.159 "data_offset": 0, 00:12:23.159 "data_size": 0 00:12:23.159 } 00:12:23.159 ] 00:12:23.159 }' 00:12:23.159 19:53:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:23.159 19:53:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.418 19:53:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:23.418 19:53:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.418 19:53:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.418 [2024-11-26 19:53:14.206463] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:23.418 [2024-11-26 19:53:14.206514] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:23.418 [2024-11-26 19:53:14.206521] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:12:23.418 [2024-11-26 19:53:14.206746] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:23.418 [2024-11-26 19:53:14.210752] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:23.418 [2024-11-26 19:53:14.210771] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:12:23.418 [2024-11-26 19:53:14.210997] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:23.418 BaseBdev4 00:12:23.418 19:53:14 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.418 19:53:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:12:23.418 19:53:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:12:23.418 19:53:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:23.418 19:53:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:23.418 19:53:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:23.418 19:53:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:23.418 19:53:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:23.418 19:53:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.418 19:53:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.418 19:53:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.418 19:53:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:23.418 19:53:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.418 19:53:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.418 [ 00:12:23.418 { 00:12:23.418 "name": "BaseBdev4", 00:12:23.418 "aliases": [ 00:12:23.418 "a0e82d15-9dab-412d-be57-ce58a4eed1ba" 00:12:23.418 ], 00:12:23.418 "product_name": "Malloc disk", 00:12:23.418 "block_size": 512, 00:12:23.418 "num_blocks": 65536, 00:12:23.418 "uuid": "a0e82d15-9dab-412d-be57-ce58a4eed1ba", 00:12:23.418 "assigned_rate_limits": { 00:12:23.418 "rw_ios_per_sec": 0, 00:12:23.418 
"rw_mbytes_per_sec": 0, 00:12:23.418 "r_mbytes_per_sec": 0, 00:12:23.418 "w_mbytes_per_sec": 0 00:12:23.418 }, 00:12:23.418 "claimed": true, 00:12:23.418 "claim_type": "exclusive_write", 00:12:23.418 "zoned": false, 00:12:23.418 "supported_io_types": { 00:12:23.418 "read": true, 00:12:23.418 "write": true, 00:12:23.418 "unmap": true, 00:12:23.418 "flush": true, 00:12:23.418 "reset": true, 00:12:23.418 "nvme_admin": false, 00:12:23.418 "nvme_io": false, 00:12:23.418 "nvme_io_md": false, 00:12:23.418 "write_zeroes": true, 00:12:23.418 "zcopy": true, 00:12:23.418 "get_zone_info": false, 00:12:23.418 "zone_management": false, 00:12:23.418 "zone_append": false, 00:12:23.418 "compare": false, 00:12:23.418 "compare_and_write": false, 00:12:23.418 "abort": true, 00:12:23.418 "seek_hole": false, 00:12:23.418 "seek_data": false, 00:12:23.418 "copy": true, 00:12:23.418 "nvme_iov_md": false 00:12:23.418 }, 00:12:23.418 "memory_domains": [ 00:12:23.418 { 00:12:23.418 "dma_device_id": "system", 00:12:23.418 "dma_device_type": 1 00:12:23.418 }, 00:12:23.418 { 00:12:23.418 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:23.418 "dma_device_type": 2 00:12:23.418 } 00:12:23.418 ], 00:12:23.418 "driver_specific": {} 00:12:23.418 } 00:12:23.418 ] 00:12:23.419 19:53:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.419 19:53:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:23.419 19:53:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:23.419 19:53:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:23.419 19:53:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:12:23.419 19:53:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:23.419 19:53:14 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:23.419 19:53:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:23.419 19:53:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:23.419 19:53:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:23.419 19:53:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:23.419 19:53:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:23.419 19:53:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:23.419 19:53:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:23.419 19:53:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:23.419 19:53:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:23.419 19:53:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.419 19:53:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.419 19:53:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.419 19:53:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:23.419 "name": "Existed_Raid", 00:12:23.419 "uuid": "687ac1e3-2491-43e2-b4e6-bcdf99bcb916", 00:12:23.419 "strip_size_kb": 64, 00:12:23.419 "state": "online", 00:12:23.419 "raid_level": "raid5f", 00:12:23.419 "superblock": false, 00:12:23.419 "num_base_bdevs": 4, 00:12:23.419 "num_base_bdevs_discovered": 4, 00:12:23.419 "num_base_bdevs_operational": 4, 00:12:23.419 "base_bdevs_list": [ 00:12:23.419 { 00:12:23.419 "name": 
"BaseBdev1", 00:12:23.419 "uuid": "4f751151-cf58-4638-8f61-f9b0a7968b23", 00:12:23.419 "is_configured": true, 00:12:23.419 "data_offset": 0, 00:12:23.419 "data_size": 65536 00:12:23.419 }, 00:12:23.419 { 00:12:23.419 "name": "BaseBdev2", 00:12:23.419 "uuid": "71d9abaf-c6f4-49e8-9521-a2af4f59fcbb", 00:12:23.419 "is_configured": true, 00:12:23.419 "data_offset": 0, 00:12:23.419 "data_size": 65536 00:12:23.419 }, 00:12:23.419 { 00:12:23.419 "name": "BaseBdev3", 00:12:23.419 "uuid": "137e6536-5cca-4f0e-a2d2-e7fbc1d4ab68", 00:12:23.419 "is_configured": true, 00:12:23.419 "data_offset": 0, 00:12:23.419 "data_size": 65536 00:12:23.419 }, 00:12:23.419 { 00:12:23.419 "name": "BaseBdev4", 00:12:23.419 "uuid": "a0e82d15-9dab-412d-be57-ce58a4eed1ba", 00:12:23.419 "is_configured": true, 00:12:23.419 "data_offset": 0, 00:12:23.419 "data_size": 65536 00:12:23.419 } 00:12:23.419 ] 00:12:23.419 }' 00:12:23.419 19:53:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:23.419 19:53:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.688 19:53:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:12:23.688 19:53:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:23.688 19:53:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:23.688 19:53:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:23.688 19:53:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:23.688 19:53:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:23.688 19:53:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:23.688 19:53:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd 
bdev_get_bdevs -b Existed_Raid 00:12:23.688 19:53:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.688 19:53:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.688 [2024-11-26 19:53:14.551827] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:23.688 19:53:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.688 19:53:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:23.688 "name": "Existed_Raid", 00:12:23.688 "aliases": [ 00:12:23.688 "687ac1e3-2491-43e2-b4e6-bcdf99bcb916" 00:12:23.688 ], 00:12:23.688 "product_name": "Raid Volume", 00:12:23.688 "block_size": 512, 00:12:23.688 "num_blocks": 196608, 00:12:23.688 "uuid": "687ac1e3-2491-43e2-b4e6-bcdf99bcb916", 00:12:23.689 "assigned_rate_limits": { 00:12:23.689 "rw_ios_per_sec": 0, 00:12:23.689 "rw_mbytes_per_sec": 0, 00:12:23.689 "r_mbytes_per_sec": 0, 00:12:23.689 "w_mbytes_per_sec": 0 00:12:23.689 }, 00:12:23.689 "claimed": false, 00:12:23.689 "zoned": false, 00:12:23.689 "supported_io_types": { 00:12:23.689 "read": true, 00:12:23.689 "write": true, 00:12:23.689 "unmap": false, 00:12:23.689 "flush": false, 00:12:23.689 "reset": true, 00:12:23.689 "nvme_admin": false, 00:12:23.689 "nvme_io": false, 00:12:23.689 "nvme_io_md": false, 00:12:23.689 "write_zeroes": true, 00:12:23.689 "zcopy": false, 00:12:23.689 "get_zone_info": false, 00:12:23.689 "zone_management": false, 00:12:23.689 "zone_append": false, 00:12:23.689 "compare": false, 00:12:23.689 "compare_and_write": false, 00:12:23.689 "abort": false, 00:12:23.689 "seek_hole": false, 00:12:23.689 "seek_data": false, 00:12:23.689 "copy": false, 00:12:23.689 "nvme_iov_md": false 00:12:23.689 }, 00:12:23.689 "driver_specific": { 00:12:23.689 "raid": { 00:12:23.689 "uuid": "687ac1e3-2491-43e2-b4e6-bcdf99bcb916", 00:12:23.689 "strip_size_kb": 64, 
00:12:23.689 "state": "online", 00:12:23.689 "raid_level": "raid5f", 00:12:23.689 "superblock": false, 00:12:23.689 "num_base_bdevs": 4, 00:12:23.689 "num_base_bdevs_discovered": 4, 00:12:23.689 "num_base_bdevs_operational": 4, 00:12:23.689 "base_bdevs_list": [ 00:12:23.689 { 00:12:23.689 "name": "BaseBdev1", 00:12:23.689 "uuid": "4f751151-cf58-4638-8f61-f9b0a7968b23", 00:12:23.689 "is_configured": true, 00:12:23.689 "data_offset": 0, 00:12:23.689 "data_size": 65536 00:12:23.689 }, 00:12:23.689 { 00:12:23.689 "name": "BaseBdev2", 00:12:23.689 "uuid": "71d9abaf-c6f4-49e8-9521-a2af4f59fcbb", 00:12:23.689 "is_configured": true, 00:12:23.689 "data_offset": 0, 00:12:23.689 "data_size": 65536 00:12:23.689 }, 00:12:23.689 { 00:12:23.689 "name": "BaseBdev3", 00:12:23.689 "uuid": "137e6536-5cca-4f0e-a2d2-e7fbc1d4ab68", 00:12:23.689 "is_configured": true, 00:12:23.689 "data_offset": 0, 00:12:23.689 "data_size": 65536 00:12:23.689 }, 00:12:23.689 { 00:12:23.689 "name": "BaseBdev4", 00:12:23.689 "uuid": "a0e82d15-9dab-412d-be57-ce58a4eed1ba", 00:12:23.689 "is_configured": true, 00:12:23.689 "data_offset": 0, 00:12:23.689 "data_size": 65536 00:12:23.689 } 00:12:23.689 ] 00:12:23.689 } 00:12:23.689 } 00:12:23.689 }' 00:12:23.689 19:53:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:23.689 19:53:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:12:23.689 BaseBdev2 00:12:23.689 BaseBdev3 00:12:23.689 BaseBdev4' 00:12:23.689 19:53:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:23.947 19:53:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:23.947 19:53:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:23.947 19:53:14 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:12:23.947 19:53:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.947 19:53:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.947 19:53:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:23.947 19:53:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.947 19:53:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:23.947 19:53:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:23.947 19:53:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:23.947 19:53:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:23.947 19:53:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:23.947 19:53:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.947 19:53:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.947 19:53:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.947 19:53:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:23.947 19:53:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:23.947 19:53:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:23.947 19:53:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev3 00:12:23.947 19:53:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.947 19:53:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.947 19:53:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:23.947 19:53:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.947 19:53:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:23.947 19:53:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:23.947 19:53:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:23.947 19:53:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:23.947 19:53:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:23.948 19:53:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.948 19:53:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.948 19:53:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.948 19:53:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:23.948 19:53:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:23.948 19:53:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:23.948 19:53:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.948 19:53:14 bdev_raid.raid5f_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:12:23.948 [2024-11-26 19:53:14.747681] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:23.948 19:53:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.948 19:53:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:12:23.948 19:53:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:12:23.948 19:53:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:23.948 19:53:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:12:23.948 19:53:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:12:23.948 19:53:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:12:23.948 19:53:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:23.948 19:53:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:23.948 19:53:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:23.948 19:53:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:23.948 19:53:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:23.948 19:53:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:23.948 19:53:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:23.948 19:53:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:23.948 19:53:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:23.948 19:53:14 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:23.948 19:53:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:23.948 19:53:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.948 19:53:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.948 19:53:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.948 19:53:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:23.948 "name": "Existed_Raid", 00:12:23.948 "uuid": "687ac1e3-2491-43e2-b4e6-bcdf99bcb916", 00:12:23.948 "strip_size_kb": 64, 00:12:23.948 "state": "online", 00:12:23.948 "raid_level": "raid5f", 00:12:23.948 "superblock": false, 00:12:23.948 "num_base_bdevs": 4, 00:12:23.948 "num_base_bdevs_discovered": 3, 00:12:23.948 "num_base_bdevs_operational": 3, 00:12:23.948 "base_bdevs_list": [ 00:12:23.948 { 00:12:23.948 "name": null, 00:12:23.948 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:23.948 "is_configured": false, 00:12:23.948 "data_offset": 0, 00:12:23.948 "data_size": 65536 00:12:23.948 }, 00:12:23.948 { 00:12:23.948 "name": "BaseBdev2", 00:12:23.948 "uuid": "71d9abaf-c6f4-49e8-9521-a2af4f59fcbb", 00:12:23.948 "is_configured": true, 00:12:23.948 "data_offset": 0, 00:12:23.948 "data_size": 65536 00:12:23.948 }, 00:12:23.948 { 00:12:23.948 "name": "BaseBdev3", 00:12:23.948 "uuid": "137e6536-5cca-4f0e-a2d2-e7fbc1d4ab68", 00:12:23.948 "is_configured": true, 00:12:23.948 "data_offset": 0, 00:12:23.948 "data_size": 65536 00:12:23.948 }, 00:12:23.948 { 00:12:23.948 "name": "BaseBdev4", 00:12:23.948 "uuid": "a0e82d15-9dab-412d-be57-ce58a4eed1ba", 00:12:23.948 "is_configured": true, 00:12:23.948 "data_offset": 0, 00:12:23.948 "data_size": 65536 00:12:23.948 } 00:12:23.948 ] 00:12:23.948 }' 00:12:23.948 
19:53:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:23.948 19:53:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.206 19:53:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:12:24.206 19:53:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:24.206 19:53:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:24.206 19:53:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:24.206 19:53:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.206 19:53:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.206 19:53:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.206 19:53:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:24.206 19:53:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:24.206 19:53:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:12:24.206 19:53:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.206 19:53:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.206 [2024-11-26 19:53:15.128599] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:24.206 [2024-11-26 19:53:15.128797] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:24.464 [2024-11-26 19:53:15.178049] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:24.464 19:53:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:12:24.464 19:53:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:24.464 19:53:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:24.464 19:53:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:24.464 19:53:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.464 19:53:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.464 19:53:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:24.464 19:53:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.464 19:53:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:24.464 19:53:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:24.464 19:53:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:12:24.464 19:53:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.464 19:53:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.464 [2024-11-26 19:53:15.214062] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:24.464 19:53:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.464 19:53:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:24.464 19:53:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:24.464 19:53:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:24.464 19:53:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # 
jq -r '.[0]["name"]' 00:12:24.464 19:53:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.464 19:53:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.464 19:53:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.464 19:53:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:24.464 19:53:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:24.464 19:53:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:12:24.464 19:53:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.464 19:53:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.464 [2024-11-26 19:53:15.299365] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:12:24.464 [2024-11-26 19:53:15.299483] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:12:24.464 19:53:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.464 19:53:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:24.464 19:53:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:24.464 19:53:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:24.464 19:53:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:12:24.464 19:53:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.464 19:53:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.464 19:53:15 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.464 19:53:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:12:24.464 19:53:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:12:24.464 19:53:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:12:24.464 19:53:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:12:24.464 19:53:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:24.464 19:53:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:24.464 19:53:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.464 19:53:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.723 BaseBdev2 00:12:24.723 19:53:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.723 19:53:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:12:24.723 19:53:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:24.723 19:53:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:24.723 19:53:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:24.723 19:53:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:24.723 19:53:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:24.723 19:53:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:24.723 19:53:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:12:24.723 19:53:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.723 19:53:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.723 19:53:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:24.723 19:53:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.723 19:53:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.723 [ 00:12:24.723 { 00:12:24.723 "name": "BaseBdev2", 00:12:24.723 "aliases": [ 00:12:24.723 "03a03c9f-0ea9-4603-86ba-4daa5febc4a8" 00:12:24.723 ], 00:12:24.723 "product_name": "Malloc disk", 00:12:24.723 "block_size": 512, 00:12:24.723 "num_blocks": 65536, 00:12:24.723 "uuid": "03a03c9f-0ea9-4603-86ba-4daa5febc4a8", 00:12:24.723 "assigned_rate_limits": { 00:12:24.723 "rw_ios_per_sec": 0, 00:12:24.723 "rw_mbytes_per_sec": 0, 00:12:24.723 "r_mbytes_per_sec": 0, 00:12:24.723 "w_mbytes_per_sec": 0 00:12:24.723 }, 00:12:24.723 "claimed": false, 00:12:24.723 "zoned": false, 00:12:24.723 "supported_io_types": { 00:12:24.723 "read": true, 00:12:24.723 "write": true, 00:12:24.723 "unmap": true, 00:12:24.723 "flush": true, 00:12:24.723 "reset": true, 00:12:24.723 "nvme_admin": false, 00:12:24.723 "nvme_io": false, 00:12:24.723 "nvme_io_md": false, 00:12:24.723 "write_zeroes": true, 00:12:24.723 "zcopy": true, 00:12:24.723 "get_zone_info": false, 00:12:24.723 "zone_management": false, 00:12:24.723 "zone_append": false, 00:12:24.723 "compare": false, 00:12:24.723 "compare_and_write": false, 00:12:24.723 "abort": true, 00:12:24.723 "seek_hole": false, 00:12:24.723 "seek_data": false, 00:12:24.723 "copy": true, 00:12:24.723 "nvme_iov_md": false 00:12:24.723 }, 00:12:24.723 "memory_domains": [ 00:12:24.723 { 00:12:24.723 "dma_device_id": "system", 00:12:24.723 "dma_device_type": 1 00:12:24.723 }, 
00:12:24.723 { 00:12:24.723 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:24.723 "dma_device_type": 2 00:12:24.723 } 00:12:24.723 ], 00:12:24.723 "driver_specific": {} 00:12:24.723 } 00:12:24.723 ] 00:12:24.723 19:53:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.723 19:53:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:24.723 19:53:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:24.724 19:53:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:24.724 19:53:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:24.724 19:53:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.724 19:53:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.724 BaseBdev3 00:12:24.724 19:53:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.724 19:53:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:12:24.724 19:53:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:12:24.724 19:53:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:24.724 19:53:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:24.724 19:53:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:24.724 19:53:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:24.724 19:53:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:24.724 19:53:15 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.724 19:53:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.724 19:53:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.724 19:53:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:24.724 19:53:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.724 19:53:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.724 [ 00:12:24.724 { 00:12:24.724 "name": "BaseBdev3", 00:12:24.724 "aliases": [ 00:12:24.724 "1750e81b-46c1-4166-a76b-14cdeba9d9e3" 00:12:24.724 ], 00:12:24.724 "product_name": "Malloc disk", 00:12:24.724 "block_size": 512, 00:12:24.724 "num_blocks": 65536, 00:12:24.724 "uuid": "1750e81b-46c1-4166-a76b-14cdeba9d9e3", 00:12:24.724 "assigned_rate_limits": { 00:12:24.724 "rw_ios_per_sec": 0, 00:12:24.724 "rw_mbytes_per_sec": 0, 00:12:24.724 "r_mbytes_per_sec": 0, 00:12:24.724 "w_mbytes_per_sec": 0 00:12:24.724 }, 00:12:24.724 "claimed": false, 00:12:24.724 "zoned": false, 00:12:24.724 "supported_io_types": { 00:12:24.724 "read": true, 00:12:24.724 "write": true, 00:12:24.724 "unmap": true, 00:12:24.724 "flush": true, 00:12:24.724 "reset": true, 00:12:24.724 "nvme_admin": false, 00:12:24.724 "nvme_io": false, 00:12:24.724 "nvme_io_md": false, 00:12:24.724 "write_zeroes": true, 00:12:24.724 "zcopy": true, 00:12:24.724 "get_zone_info": false, 00:12:24.724 "zone_management": false, 00:12:24.724 "zone_append": false, 00:12:24.724 "compare": false, 00:12:24.724 "compare_and_write": false, 00:12:24.724 "abort": true, 00:12:24.724 "seek_hole": false, 00:12:24.724 "seek_data": false, 00:12:24.724 "copy": true, 00:12:24.724 "nvme_iov_md": false 00:12:24.724 }, 00:12:24.724 "memory_domains": [ 00:12:24.724 { 00:12:24.724 "dma_device_id": "system", 00:12:24.724 
"dma_device_type": 1 00:12:24.724 }, 00:12:24.724 { 00:12:24.724 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:24.724 "dma_device_type": 2 00:12:24.724 } 00:12:24.724 ], 00:12:24.724 "driver_specific": {} 00:12:24.724 } 00:12:24.724 ] 00:12:24.724 19:53:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.724 19:53:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:24.724 19:53:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:24.724 19:53:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:24.724 19:53:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:24.724 19:53:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.724 19:53:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.724 BaseBdev4 00:12:24.724 19:53:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.724 19:53:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:12:24.724 19:53:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:12:24.724 19:53:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:24.724 19:53:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:24.724 19:53:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:24.724 19:53:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:24.724 19:53:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:24.724 19:53:15 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.724 19:53:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.724 19:53:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.724 19:53:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:24.724 19:53:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.724 19:53:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.724 [ 00:12:24.724 { 00:12:24.724 "name": "BaseBdev4", 00:12:24.724 "aliases": [ 00:12:24.724 "69e8c8b1-3724-4fb7-ba27-ade7f313aa39" 00:12:24.724 ], 00:12:24.724 "product_name": "Malloc disk", 00:12:24.724 "block_size": 512, 00:12:24.724 "num_blocks": 65536, 00:12:24.724 "uuid": "69e8c8b1-3724-4fb7-ba27-ade7f313aa39", 00:12:24.724 "assigned_rate_limits": { 00:12:24.724 "rw_ios_per_sec": 0, 00:12:24.724 "rw_mbytes_per_sec": 0, 00:12:24.724 "r_mbytes_per_sec": 0, 00:12:24.724 "w_mbytes_per_sec": 0 00:12:24.724 }, 00:12:24.724 "claimed": false, 00:12:24.724 "zoned": false, 00:12:24.724 "supported_io_types": { 00:12:24.724 "read": true, 00:12:24.724 "write": true, 00:12:24.724 "unmap": true, 00:12:24.724 "flush": true, 00:12:24.724 "reset": true, 00:12:24.724 "nvme_admin": false, 00:12:24.724 "nvme_io": false, 00:12:24.724 "nvme_io_md": false, 00:12:24.724 "write_zeroes": true, 00:12:24.724 "zcopy": true, 00:12:24.724 "get_zone_info": false, 00:12:24.724 "zone_management": false, 00:12:24.724 "zone_append": false, 00:12:24.724 "compare": false, 00:12:24.724 "compare_and_write": false, 00:12:24.724 "abort": true, 00:12:24.724 "seek_hole": false, 00:12:24.724 "seek_data": false, 00:12:24.724 "copy": true, 00:12:24.724 "nvme_iov_md": false 00:12:24.724 }, 00:12:24.724 "memory_domains": [ 00:12:24.724 { 00:12:24.724 
"dma_device_id": "system", 00:12:24.724 "dma_device_type": 1 00:12:24.724 }, 00:12:24.724 { 00:12:24.724 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:24.724 "dma_device_type": 2 00:12:24.724 } 00:12:24.724 ], 00:12:24.724 "driver_specific": {} 00:12:24.724 } 00:12:24.724 ] 00:12:24.724 19:53:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.724 19:53:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:24.724 19:53:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:24.724 19:53:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:24.724 19:53:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:24.724 19:53:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.724 19:53:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.724 [2024-11-26 19:53:15.540565] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:24.724 [2024-11-26 19:53:15.540688] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:24.724 [2024-11-26 19:53:15.540754] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:24.724 [2024-11-26 19:53:15.542364] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:24.724 [2024-11-26 19:53:15.542473] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:24.724 19:53:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.724 19:53:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid 
configuring raid5f 64 4 00:12:24.724 19:53:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:24.724 19:53:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:24.724 19:53:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:24.724 19:53:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:24.725 19:53:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:24.725 19:53:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:24.725 19:53:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:24.725 19:53:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:24.725 19:53:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:24.725 19:53:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:24.725 19:53:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:24.725 19:53:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.725 19:53:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.725 19:53:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.725 19:53:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:24.725 "name": "Existed_Raid", 00:12:24.725 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:24.725 "strip_size_kb": 64, 00:12:24.725 "state": "configuring", 00:12:24.725 "raid_level": "raid5f", 00:12:24.725 "superblock": false, 00:12:24.725 
"num_base_bdevs": 4, 00:12:24.725 "num_base_bdevs_discovered": 3, 00:12:24.725 "num_base_bdevs_operational": 4, 00:12:24.725 "base_bdevs_list": [ 00:12:24.725 { 00:12:24.725 "name": "BaseBdev1", 00:12:24.725 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:24.725 "is_configured": false, 00:12:24.725 "data_offset": 0, 00:12:24.725 "data_size": 0 00:12:24.725 }, 00:12:24.725 { 00:12:24.725 "name": "BaseBdev2", 00:12:24.725 "uuid": "03a03c9f-0ea9-4603-86ba-4daa5febc4a8", 00:12:24.725 "is_configured": true, 00:12:24.725 "data_offset": 0, 00:12:24.725 "data_size": 65536 00:12:24.725 }, 00:12:24.725 { 00:12:24.725 "name": "BaseBdev3", 00:12:24.725 "uuid": "1750e81b-46c1-4166-a76b-14cdeba9d9e3", 00:12:24.725 "is_configured": true, 00:12:24.725 "data_offset": 0, 00:12:24.725 "data_size": 65536 00:12:24.725 }, 00:12:24.725 { 00:12:24.725 "name": "BaseBdev4", 00:12:24.725 "uuid": "69e8c8b1-3724-4fb7-ba27-ade7f313aa39", 00:12:24.725 "is_configured": true, 00:12:24.725 "data_offset": 0, 00:12:24.725 "data_size": 65536 00:12:24.725 } 00:12:24.725 ] 00:12:24.725 }' 00:12:24.725 19:53:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:24.725 19:53:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.983 19:53:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:24.983 19:53:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.983 19:53:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.983 [2024-11-26 19:53:15.872644] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:24.983 19:53:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.983 19:53:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 
00:12:24.983 19:53:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:24.983 19:53:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:24.983 19:53:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:24.983 19:53:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:24.983 19:53:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:24.983 19:53:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:24.983 19:53:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:24.983 19:53:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:24.983 19:53:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:24.983 19:53:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:24.983 19:53:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:24.983 19:53:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.983 19:53:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.983 19:53:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.983 19:53:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:24.983 "name": "Existed_Raid", 00:12:24.983 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:24.983 "strip_size_kb": 64, 00:12:24.983 "state": "configuring", 00:12:24.983 "raid_level": "raid5f", 00:12:24.983 "superblock": false, 00:12:24.983 "num_base_bdevs": 4, 
00:12:24.983 "num_base_bdevs_discovered": 2, 00:12:24.983 "num_base_bdevs_operational": 4, 00:12:24.983 "base_bdevs_list": [ 00:12:24.983 { 00:12:24.983 "name": "BaseBdev1", 00:12:24.983 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:24.983 "is_configured": false, 00:12:24.983 "data_offset": 0, 00:12:24.983 "data_size": 0 00:12:24.983 }, 00:12:24.983 { 00:12:24.983 "name": null, 00:12:24.983 "uuid": "03a03c9f-0ea9-4603-86ba-4daa5febc4a8", 00:12:24.983 "is_configured": false, 00:12:24.983 "data_offset": 0, 00:12:24.983 "data_size": 65536 00:12:24.983 }, 00:12:24.983 { 00:12:24.983 "name": "BaseBdev3", 00:12:24.983 "uuid": "1750e81b-46c1-4166-a76b-14cdeba9d9e3", 00:12:24.983 "is_configured": true, 00:12:24.983 "data_offset": 0, 00:12:24.983 "data_size": 65536 00:12:24.983 }, 00:12:24.983 { 00:12:24.983 "name": "BaseBdev4", 00:12:24.983 "uuid": "69e8c8b1-3724-4fb7-ba27-ade7f313aa39", 00:12:24.983 "is_configured": true, 00:12:24.983 "data_offset": 0, 00:12:24.983 "data_size": 65536 00:12:24.983 } 00:12:24.983 ] 00:12:24.983 }' 00:12:24.983 19:53:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:24.983 19:53:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.242 19:53:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:25.242 19:53:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.242 19:53:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.242 19:53:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:25.500 19:53:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.500 19:53:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:12:25.500 19:53:16 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:25.500 19:53:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.500 19:53:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.500 [2024-11-26 19:53:16.216717] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:25.500 BaseBdev1 00:12:25.500 19:53:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.500 19:53:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:12:25.500 19:53:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:25.500 19:53:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:25.500 19:53:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:25.500 19:53:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:25.500 19:53:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:25.500 19:53:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:25.500 19:53:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.500 19:53:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.500 19:53:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.500 19:53:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:25.500 19:53:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.500 19:53:16 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.500 [ 00:12:25.500 { 00:12:25.500 "name": "BaseBdev1", 00:12:25.500 "aliases": [ 00:12:25.500 "c4dcdba9-672d-4daf-8ea8-be315c2dd65f" 00:12:25.500 ], 00:12:25.500 "product_name": "Malloc disk", 00:12:25.500 "block_size": 512, 00:12:25.500 "num_blocks": 65536, 00:12:25.500 "uuid": "c4dcdba9-672d-4daf-8ea8-be315c2dd65f", 00:12:25.500 "assigned_rate_limits": { 00:12:25.500 "rw_ios_per_sec": 0, 00:12:25.500 "rw_mbytes_per_sec": 0, 00:12:25.500 "r_mbytes_per_sec": 0, 00:12:25.500 "w_mbytes_per_sec": 0 00:12:25.500 }, 00:12:25.500 "claimed": true, 00:12:25.500 "claim_type": "exclusive_write", 00:12:25.500 "zoned": false, 00:12:25.500 "supported_io_types": { 00:12:25.500 "read": true, 00:12:25.500 "write": true, 00:12:25.500 "unmap": true, 00:12:25.500 "flush": true, 00:12:25.500 "reset": true, 00:12:25.500 "nvme_admin": false, 00:12:25.500 "nvme_io": false, 00:12:25.500 "nvme_io_md": false, 00:12:25.500 "write_zeroes": true, 00:12:25.500 "zcopy": true, 00:12:25.500 "get_zone_info": false, 00:12:25.500 "zone_management": false, 00:12:25.500 "zone_append": false, 00:12:25.500 "compare": false, 00:12:25.500 "compare_and_write": false, 00:12:25.500 "abort": true, 00:12:25.500 "seek_hole": false, 00:12:25.500 "seek_data": false, 00:12:25.500 "copy": true, 00:12:25.500 "nvme_iov_md": false 00:12:25.500 }, 00:12:25.500 "memory_domains": [ 00:12:25.500 { 00:12:25.500 "dma_device_id": "system", 00:12:25.500 "dma_device_type": 1 00:12:25.500 }, 00:12:25.500 { 00:12:25.500 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:25.500 "dma_device_type": 2 00:12:25.500 } 00:12:25.500 ], 00:12:25.500 "driver_specific": {} 00:12:25.500 } 00:12:25.500 ] 00:12:25.500 19:53:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.500 19:53:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:25.500 19:53:16 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:12:25.500 19:53:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:25.500 19:53:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:25.500 19:53:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:25.500 19:53:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:25.500 19:53:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:25.500 19:53:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:25.500 19:53:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:25.500 19:53:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:25.500 19:53:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:25.500 19:53:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:25.500 19:53:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:25.500 19:53:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.500 19:53:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.501 19:53:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.501 19:53:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:25.501 "name": "Existed_Raid", 00:12:25.501 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:25.501 "strip_size_kb": 64, 00:12:25.501 "state": 
"configuring", 00:12:25.501 "raid_level": "raid5f", 00:12:25.501 "superblock": false, 00:12:25.501 "num_base_bdevs": 4, 00:12:25.501 "num_base_bdevs_discovered": 3, 00:12:25.501 "num_base_bdevs_operational": 4, 00:12:25.501 "base_bdevs_list": [ 00:12:25.501 { 00:12:25.501 "name": "BaseBdev1", 00:12:25.501 "uuid": "c4dcdba9-672d-4daf-8ea8-be315c2dd65f", 00:12:25.501 "is_configured": true, 00:12:25.501 "data_offset": 0, 00:12:25.501 "data_size": 65536 00:12:25.501 }, 00:12:25.501 { 00:12:25.501 "name": null, 00:12:25.501 "uuid": "03a03c9f-0ea9-4603-86ba-4daa5febc4a8", 00:12:25.501 "is_configured": false, 00:12:25.501 "data_offset": 0, 00:12:25.501 "data_size": 65536 00:12:25.501 }, 00:12:25.501 { 00:12:25.501 "name": "BaseBdev3", 00:12:25.501 "uuid": "1750e81b-46c1-4166-a76b-14cdeba9d9e3", 00:12:25.501 "is_configured": true, 00:12:25.501 "data_offset": 0, 00:12:25.501 "data_size": 65536 00:12:25.501 }, 00:12:25.501 { 00:12:25.501 "name": "BaseBdev4", 00:12:25.501 "uuid": "69e8c8b1-3724-4fb7-ba27-ade7f313aa39", 00:12:25.501 "is_configured": true, 00:12:25.501 "data_offset": 0, 00:12:25.501 "data_size": 65536 00:12:25.501 } 00:12:25.501 ] 00:12:25.501 }' 00:12:25.501 19:53:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:25.501 19:53:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.759 19:53:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:25.759 19:53:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:25.759 19:53:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.759 19:53:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.759 19:53:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.759 19:53:16 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:12:25.759 19:53:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:12:25.759 19:53:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.759 19:53:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.759 [2024-11-26 19:53:16.600864] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:25.759 19:53:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.759 19:53:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:12:25.759 19:53:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:25.759 19:53:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:25.759 19:53:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:25.759 19:53:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:25.759 19:53:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:25.759 19:53:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:25.759 19:53:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:25.759 19:53:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:25.759 19:53:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:25.759 19:53:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:25.759 19:53:16 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.759 19:53:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:25.759 19:53:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.759 19:53:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.759 19:53:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:25.759 "name": "Existed_Raid", 00:12:25.759 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:25.759 "strip_size_kb": 64, 00:12:25.759 "state": "configuring", 00:12:25.759 "raid_level": "raid5f", 00:12:25.759 "superblock": false, 00:12:25.759 "num_base_bdevs": 4, 00:12:25.759 "num_base_bdevs_discovered": 2, 00:12:25.759 "num_base_bdevs_operational": 4, 00:12:25.759 "base_bdevs_list": [ 00:12:25.759 { 00:12:25.759 "name": "BaseBdev1", 00:12:25.759 "uuid": "c4dcdba9-672d-4daf-8ea8-be315c2dd65f", 00:12:25.759 "is_configured": true, 00:12:25.759 "data_offset": 0, 00:12:25.759 "data_size": 65536 00:12:25.759 }, 00:12:25.759 { 00:12:25.759 "name": null, 00:12:25.759 "uuid": "03a03c9f-0ea9-4603-86ba-4daa5febc4a8", 00:12:25.759 "is_configured": false, 00:12:25.759 "data_offset": 0, 00:12:25.759 "data_size": 65536 00:12:25.759 }, 00:12:25.759 { 00:12:25.759 "name": null, 00:12:25.759 "uuid": "1750e81b-46c1-4166-a76b-14cdeba9d9e3", 00:12:25.759 "is_configured": false, 00:12:25.759 "data_offset": 0, 00:12:25.759 "data_size": 65536 00:12:25.759 }, 00:12:25.759 { 00:12:25.759 "name": "BaseBdev4", 00:12:25.759 "uuid": "69e8c8b1-3724-4fb7-ba27-ade7f313aa39", 00:12:25.759 "is_configured": true, 00:12:25.759 "data_offset": 0, 00:12:25.759 "data_size": 65536 00:12:25.759 } 00:12:25.759 ] 00:12:25.759 }' 00:12:25.759 19:53:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:25.759 19:53:16 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.018 19:53:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:26.018 19:53:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:26.018 19:53:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.018 19:53:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.018 19:53:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.018 19:53:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:12:26.018 19:53:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:12:26.018 19:53:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.018 19:53:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.018 [2024-11-26 19:53:16.924915] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:26.018 19:53:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.018 19:53:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:12:26.018 19:53:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:26.018 19:53:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:26.018 19:53:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:26.018 19:53:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:26.018 
19:53:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:26.018 19:53:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:26.018 19:53:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:26.018 19:53:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:26.018 19:53:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:26.018 19:53:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:26.018 19:53:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.018 19:53:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.018 19:53:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:26.018 19:53:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.276 19:53:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:26.276 "name": "Existed_Raid", 00:12:26.276 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:26.276 "strip_size_kb": 64, 00:12:26.276 "state": "configuring", 00:12:26.276 "raid_level": "raid5f", 00:12:26.276 "superblock": false, 00:12:26.276 "num_base_bdevs": 4, 00:12:26.276 "num_base_bdevs_discovered": 3, 00:12:26.276 "num_base_bdevs_operational": 4, 00:12:26.276 "base_bdevs_list": [ 00:12:26.276 { 00:12:26.276 "name": "BaseBdev1", 00:12:26.276 "uuid": "c4dcdba9-672d-4daf-8ea8-be315c2dd65f", 00:12:26.276 "is_configured": true, 00:12:26.276 "data_offset": 0, 00:12:26.276 "data_size": 65536 00:12:26.276 }, 00:12:26.276 { 00:12:26.276 "name": null, 00:12:26.276 "uuid": "03a03c9f-0ea9-4603-86ba-4daa5febc4a8", 00:12:26.276 "is_configured": 
false, 00:12:26.276 "data_offset": 0, 00:12:26.276 "data_size": 65536 00:12:26.276 }, 00:12:26.276 { 00:12:26.276 "name": "BaseBdev3", 00:12:26.276 "uuid": "1750e81b-46c1-4166-a76b-14cdeba9d9e3", 00:12:26.276 "is_configured": true, 00:12:26.276 "data_offset": 0, 00:12:26.276 "data_size": 65536 00:12:26.276 }, 00:12:26.276 { 00:12:26.276 "name": "BaseBdev4", 00:12:26.276 "uuid": "69e8c8b1-3724-4fb7-ba27-ade7f313aa39", 00:12:26.276 "is_configured": true, 00:12:26.276 "data_offset": 0, 00:12:26.276 "data_size": 65536 00:12:26.276 } 00:12:26.276 ] 00:12:26.276 }' 00:12:26.276 19:53:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:26.276 19:53:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.534 19:53:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:26.534 19:53:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:26.534 19:53:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.534 19:53:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.534 19:53:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.534 19:53:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:12:26.534 19:53:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:26.534 19:53:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.534 19:53:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.534 [2024-11-26 19:53:17.281019] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:26.534 19:53:17 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.534 19:53:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:12:26.534 19:53:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:26.534 19:53:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:26.534 19:53:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:26.534 19:53:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:26.534 19:53:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:26.534 19:53:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:26.534 19:53:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:26.534 19:53:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:26.534 19:53:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:26.534 19:53:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:26.534 19:53:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:26.534 19:53:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.534 19:53:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.534 19:53:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.534 19:53:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:26.534 "name": "Existed_Raid", 00:12:26.534 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:12:26.534 "strip_size_kb": 64, 00:12:26.534 "state": "configuring", 00:12:26.534 "raid_level": "raid5f", 00:12:26.534 "superblock": false, 00:12:26.534 "num_base_bdevs": 4, 00:12:26.534 "num_base_bdevs_discovered": 2, 00:12:26.534 "num_base_bdevs_operational": 4, 00:12:26.534 "base_bdevs_list": [ 00:12:26.534 { 00:12:26.534 "name": null, 00:12:26.534 "uuid": "c4dcdba9-672d-4daf-8ea8-be315c2dd65f", 00:12:26.534 "is_configured": false, 00:12:26.534 "data_offset": 0, 00:12:26.534 "data_size": 65536 00:12:26.534 }, 00:12:26.534 { 00:12:26.534 "name": null, 00:12:26.534 "uuid": "03a03c9f-0ea9-4603-86ba-4daa5febc4a8", 00:12:26.534 "is_configured": false, 00:12:26.534 "data_offset": 0, 00:12:26.534 "data_size": 65536 00:12:26.534 }, 00:12:26.534 { 00:12:26.534 "name": "BaseBdev3", 00:12:26.534 "uuid": "1750e81b-46c1-4166-a76b-14cdeba9d9e3", 00:12:26.534 "is_configured": true, 00:12:26.534 "data_offset": 0, 00:12:26.534 "data_size": 65536 00:12:26.534 }, 00:12:26.534 { 00:12:26.534 "name": "BaseBdev4", 00:12:26.534 "uuid": "69e8c8b1-3724-4fb7-ba27-ade7f313aa39", 00:12:26.534 "is_configured": true, 00:12:26.534 "data_offset": 0, 00:12:26.534 "data_size": 65536 00:12:26.534 } 00:12:26.534 ] 00:12:26.534 }' 00:12:26.534 19:53:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:26.534 19:53:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.792 19:53:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:26.792 19:53:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.792 19:53:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.792 19:53:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:26.792 19:53:17 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.793 19:53:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:12:26.793 19:53:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:12:26.793 19:53:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.793 19:53:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.793 [2024-11-26 19:53:17.669046] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:26.793 19:53:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.793 19:53:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:12:26.793 19:53:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:26.793 19:53:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:26.793 19:53:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:26.793 19:53:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:26.793 19:53:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:26.793 19:53:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:26.793 19:53:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:26.793 19:53:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:26.793 19:53:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:26.793 19:53:17 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:26.793 19:53:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:26.793 19:53:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.793 19:53:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.793 19:53:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.793 19:53:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:26.793 "name": "Existed_Raid", 00:12:26.793 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:26.793 "strip_size_kb": 64, 00:12:26.793 "state": "configuring", 00:12:26.793 "raid_level": "raid5f", 00:12:26.793 "superblock": false, 00:12:26.793 "num_base_bdevs": 4, 00:12:26.793 "num_base_bdevs_discovered": 3, 00:12:26.793 "num_base_bdevs_operational": 4, 00:12:26.793 "base_bdevs_list": [ 00:12:26.793 { 00:12:26.793 "name": null, 00:12:26.793 "uuid": "c4dcdba9-672d-4daf-8ea8-be315c2dd65f", 00:12:26.793 "is_configured": false, 00:12:26.793 "data_offset": 0, 00:12:26.793 "data_size": 65536 00:12:26.793 }, 00:12:26.793 { 00:12:26.793 "name": "BaseBdev2", 00:12:26.793 "uuid": "03a03c9f-0ea9-4603-86ba-4daa5febc4a8", 00:12:26.793 "is_configured": true, 00:12:26.793 "data_offset": 0, 00:12:26.793 "data_size": 65536 00:12:26.793 }, 00:12:26.793 { 00:12:26.793 "name": "BaseBdev3", 00:12:26.793 "uuid": "1750e81b-46c1-4166-a76b-14cdeba9d9e3", 00:12:26.793 "is_configured": true, 00:12:26.793 "data_offset": 0, 00:12:26.793 "data_size": 65536 00:12:26.793 }, 00:12:26.793 { 00:12:26.793 "name": "BaseBdev4", 00:12:26.793 "uuid": "69e8c8b1-3724-4fb7-ba27-ade7f313aa39", 00:12:26.793 "is_configured": true, 00:12:26.793 "data_offset": 0, 00:12:26.793 "data_size": 65536 00:12:26.793 } 00:12:26.793 ] 00:12:26.793 }' 00:12:26.793 19:53:17 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:26.793 19:53:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.050 19:53:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:27.050 19:53:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:27.050 19:53:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.050 19:53:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.050 19:53:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.309 19:53:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:12:27.309 19:53:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:27.309 19:53:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:27.309 19:53:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.309 19:53:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.309 19:53:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.309 19:53:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u c4dcdba9-672d-4daf-8ea8-be315c2dd65f 00:12:27.309 19:53:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.309 19:53:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.309 [2024-11-26 19:53:18.057160] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:27.309 [2024-11-26 
19:53:18.057200] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:27.309 [2024-11-26 19:53:18.057206] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:12:27.309 [2024-11-26 19:53:18.057439] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:12:27.309 [2024-11-26 19:53:18.061334] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:27.309 [2024-11-26 19:53:18.061362] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:12:27.309 [2024-11-26 19:53:18.061561] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:27.309 NewBaseBdev 00:12:27.309 19:53:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.309 19:53:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:12:27.309 19:53:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:12:27.309 19:53:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:27.309 19:53:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:27.309 19:53:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:27.309 19:53:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:27.309 19:53:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:27.309 19:53:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.309 19:53:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.309 19:53:18 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.309 19:53:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:27.309 19:53:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.309 19:53:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.309 [ 00:12:27.309 { 00:12:27.309 "name": "NewBaseBdev", 00:12:27.309 "aliases": [ 00:12:27.309 "c4dcdba9-672d-4daf-8ea8-be315c2dd65f" 00:12:27.309 ], 00:12:27.309 "product_name": "Malloc disk", 00:12:27.309 "block_size": 512, 00:12:27.309 "num_blocks": 65536, 00:12:27.309 "uuid": "c4dcdba9-672d-4daf-8ea8-be315c2dd65f", 00:12:27.309 "assigned_rate_limits": { 00:12:27.309 "rw_ios_per_sec": 0, 00:12:27.309 "rw_mbytes_per_sec": 0, 00:12:27.309 "r_mbytes_per_sec": 0, 00:12:27.309 "w_mbytes_per_sec": 0 00:12:27.309 }, 00:12:27.309 "claimed": true, 00:12:27.309 "claim_type": "exclusive_write", 00:12:27.309 "zoned": false, 00:12:27.309 "supported_io_types": { 00:12:27.309 "read": true, 00:12:27.309 "write": true, 00:12:27.309 "unmap": true, 00:12:27.309 "flush": true, 00:12:27.309 "reset": true, 00:12:27.309 "nvme_admin": false, 00:12:27.309 "nvme_io": false, 00:12:27.309 "nvme_io_md": false, 00:12:27.309 "write_zeroes": true, 00:12:27.309 "zcopy": true, 00:12:27.309 "get_zone_info": false, 00:12:27.309 "zone_management": false, 00:12:27.309 "zone_append": false, 00:12:27.309 "compare": false, 00:12:27.309 "compare_and_write": false, 00:12:27.309 "abort": true, 00:12:27.309 "seek_hole": false, 00:12:27.309 "seek_data": false, 00:12:27.309 "copy": true, 00:12:27.309 "nvme_iov_md": false 00:12:27.309 }, 00:12:27.309 "memory_domains": [ 00:12:27.309 { 00:12:27.309 "dma_device_id": "system", 00:12:27.309 "dma_device_type": 1 00:12:27.309 }, 00:12:27.309 { 00:12:27.309 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:27.309 "dma_device_type": 2 00:12:27.309 } 
00:12:27.309 ], 00:12:27.309 "driver_specific": {} 00:12:27.309 } 00:12:27.309 ] 00:12:27.309 19:53:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.309 19:53:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:27.309 19:53:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:12:27.309 19:53:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:27.309 19:53:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:27.309 19:53:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:27.309 19:53:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:27.309 19:53:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:27.309 19:53:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:27.309 19:53:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:27.309 19:53:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:27.309 19:53:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:27.309 19:53:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:27.309 19:53:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:27.309 19:53:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.310 19:53:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.310 19:53:18 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.310 19:53:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:27.310 "name": "Existed_Raid", 00:12:27.310 "uuid": "75e31732-afcb-4e89-8539-98471a2ffed3", 00:12:27.310 "strip_size_kb": 64, 00:12:27.310 "state": "online", 00:12:27.310 "raid_level": "raid5f", 00:12:27.310 "superblock": false, 00:12:27.310 "num_base_bdevs": 4, 00:12:27.310 "num_base_bdevs_discovered": 4, 00:12:27.310 "num_base_bdevs_operational": 4, 00:12:27.310 "base_bdevs_list": [ 00:12:27.310 { 00:12:27.310 "name": "NewBaseBdev", 00:12:27.310 "uuid": "c4dcdba9-672d-4daf-8ea8-be315c2dd65f", 00:12:27.310 "is_configured": true, 00:12:27.310 "data_offset": 0, 00:12:27.310 "data_size": 65536 00:12:27.310 }, 00:12:27.310 { 00:12:27.310 "name": "BaseBdev2", 00:12:27.310 "uuid": "03a03c9f-0ea9-4603-86ba-4daa5febc4a8", 00:12:27.310 "is_configured": true, 00:12:27.310 "data_offset": 0, 00:12:27.310 "data_size": 65536 00:12:27.310 }, 00:12:27.310 { 00:12:27.310 "name": "BaseBdev3", 00:12:27.310 "uuid": "1750e81b-46c1-4166-a76b-14cdeba9d9e3", 00:12:27.310 "is_configured": true, 00:12:27.310 "data_offset": 0, 00:12:27.310 "data_size": 65536 00:12:27.310 }, 00:12:27.310 { 00:12:27.310 "name": "BaseBdev4", 00:12:27.310 "uuid": "69e8c8b1-3724-4fb7-ba27-ade7f313aa39", 00:12:27.310 "is_configured": true, 00:12:27.310 "data_offset": 0, 00:12:27.310 "data_size": 65536 00:12:27.310 } 00:12:27.310 ] 00:12:27.310 }' 00:12:27.310 19:53:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:27.310 19:53:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.568 19:53:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:12:27.568 19:53:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:27.568 19:53:18 bdev_raid.raid5f_state_function_test 
-- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:27.568 19:53:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:27.568 19:53:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:27.568 19:53:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:27.568 19:53:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:27.568 19:53:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:27.568 19:53:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.568 19:53:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.568 [2024-11-26 19:53:18.414333] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:27.568 19:53:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.568 19:53:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:27.568 "name": "Existed_Raid", 00:12:27.568 "aliases": [ 00:12:27.568 "75e31732-afcb-4e89-8539-98471a2ffed3" 00:12:27.568 ], 00:12:27.568 "product_name": "Raid Volume", 00:12:27.568 "block_size": 512, 00:12:27.568 "num_blocks": 196608, 00:12:27.568 "uuid": "75e31732-afcb-4e89-8539-98471a2ffed3", 00:12:27.568 "assigned_rate_limits": { 00:12:27.568 "rw_ios_per_sec": 0, 00:12:27.568 "rw_mbytes_per_sec": 0, 00:12:27.568 "r_mbytes_per_sec": 0, 00:12:27.568 "w_mbytes_per_sec": 0 00:12:27.568 }, 00:12:27.568 "claimed": false, 00:12:27.568 "zoned": false, 00:12:27.568 "supported_io_types": { 00:12:27.568 "read": true, 00:12:27.568 "write": true, 00:12:27.568 "unmap": false, 00:12:27.568 "flush": false, 00:12:27.568 "reset": true, 00:12:27.568 "nvme_admin": false, 00:12:27.568 "nvme_io": false, 00:12:27.568 "nvme_io_md": 
false, 00:12:27.568 "write_zeroes": true, 00:12:27.568 "zcopy": false, 00:12:27.568 "get_zone_info": false, 00:12:27.568 "zone_management": false, 00:12:27.568 "zone_append": false, 00:12:27.568 "compare": false, 00:12:27.568 "compare_and_write": false, 00:12:27.568 "abort": false, 00:12:27.568 "seek_hole": false, 00:12:27.568 "seek_data": false, 00:12:27.568 "copy": false, 00:12:27.568 "nvme_iov_md": false 00:12:27.568 }, 00:12:27.568 "driver_specific": { 00:12:27.568 "raid": { 00:12:27.568 "uuid": "75e31732-afcb-4e89-8539-98471a2ffed3", 00:12:27.568 "strip_size_kb": 64, 00:12:27.568 "state": "online", 00:12:27.568 "raid_level": "raid5f", 00:12:27.568 "superblock": false, 00:12:27.568 "num_base_bdevs": 4, 00:12:27.568 "num_base_bdevs_discovered": 4, 00:12:27.568 "num_base_bdevs_operational": 4, 00:12:27.568 "base_bdevs_list": [ 00:12:27.568 { 00:12:27.568 "name": "NewBaseBdev", 00:12:27.568 "uuid": "c4dcdba9-672d-4daf-8ea8-be315c2dd65f", 00:12:27.568 "is_configured": true, 00:12:27.568 "data_offset": 0, 00:12:27.568 "data_size": 65536 00:12:27.568 }, 00:12:27.568 { 00:12:27.568 "name": "BaseBdev2", 00:12:27.568 "uuid": "03a03c9f-0ea9-4603-86ba-4daa5febc4a8", 00:12:27.568 "is_configured": true, 00:12:27.568 "data_offset": 0, 00:12:27.568 "data_size": 65536 00:12:27.568 }, 00:12:27.568 { 00:12:27.568 "name": "BaseBdev3", 00:12:27.568 "uuid": "1750e81b-46c1-4166-a76b-14cdeba9d9e3", 00:12:27.568 "is_configured": true, 00:12:27.568 "data_offset": 0, 00:12:27.568 "data_size": 65536 00:12:27.568 }, 00:12:27.568 { 00:12:27.568 "name": "BaseBdev4", 00:12:27.568 "uuid": "69e8c8b1-3724-4fb7-ba27-ade7f313aa39", 00:12:27.568 "is_configured": true, 00:12:27.568 "data_offset": 0, 00:12:27.568 "data_size": 65536 00:12:27.568 } 00:12:27.568 ] 00:12:27.568 } 00:12:27.568 } 00:12:27.568 }' 00:12:27.568 19:53:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:27.568 19:53:18 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:12:27.568 BaseBdev2 00:12:27.568 BaseBdev3 00:12:27.568 BaseBdev4' 00:12:27.568 19:53:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:27.826 19:53:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:27.827 19:53:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:27.827 19:53:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:12:27.827 19:53:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.827 19:53:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.827 19:53:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:27.827 19:53:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.827 19:53:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:27.827 19:53:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:27.827 19:53:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:27.827 19:53:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:27.827 19:53:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:27.827 19:53:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.827 19:53:18 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:27.827 19:53:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.827 19:53:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:27.827 19:53:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:27.827 19:53:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:27.827 19:53:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:27.827 19:53:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.827 19:53:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.827 19:53:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:27.827 19:53:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.827 19:53:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:27.827 19:53:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:27.827 19:53:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:27.827 19:53:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:27.827 19:53:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:27.827 19:53:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.827 19:53:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.827 19:53:18 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.827 19:53:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:27.827 19:53:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:27.827 19:53:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:27.827 19:53:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.827 19:53:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.827 [2024-11-26 19:53:18.642153] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:27.827 [2024-11-26 19:53:18.642260] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:27.827 [2024-11-26 19:53:18.642379] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:27.827 [2024-11-26 19:53:18.642734] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:27.827 [2024-11-26 19:53:18.642814] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:12:27.827 19:53:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.827 19:53:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 80402 00:12:27.827 19:53:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 80402 ']' 00:12:27.827 19:53:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # kill -0 80402 00:12:27.827 19:53:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # uname 00:12:27.827 19:53:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 
00:12:27.827 19:53:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80402 00:12:27.827 killing process with pid 80402 00:12:27.827 19:53:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:27.827 19:53:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:27.827 19:53:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80402' 00:12:27.827 19:53:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@973 -- # kill 80402 00:12:27.827 [2024-11-26 19:53:18.669359] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:27.827 19:53:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@978 -- # wait 80402 00:12:28.084 [2024-11-26 19:53:18.872948] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:28.650 ************************************ 00:12:28.650 END TEST raid5f_state_function_test 00:12:28.650 ************************************ 00:12:28.650 19:53:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:12:28.650 00:12:28.650 real 0m7.900s 00:12:28.650 user 0m12.557s 00:12:28.650 sys 0m1.472s 00:12:28.650 19:53:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:28.650 19:53:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.650 19:53:19 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 4 true 00:12:28.650 19:53:19 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:28.650 19:53:19 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:28.650 19:53:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:28.650 ************************************ 00:12:28.650 START TEST 
raid5f_state_function_test_sb 00:12:28.650 ************************************ 00:12:28.650 19:53:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 4 true 00:12:28.650 19:53:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:12:28.650 19:53:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:12:28.650 19:53:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:12:28.650 19:53:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:12:28.650 19:53:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:12:28.650 19:53:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:28.650 19:53:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:12:28.650 19:53:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:28.650 19:53:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:28.650 19:53:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:12:28.650 19:53:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:28.650 19:53:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:28.650 19:53:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:12:28.650 19:53:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:28.650 19:53:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:28.650 19:53:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:12:28.650 
19:53:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:28.650 19:53:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:28.650 19:53:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:28.650 19:53:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:12:28.650 19:53:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:12:28.650 19:53:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:12:28.650 19:53:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:12:28.650 19:53:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:12:28.650 19:53:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:12:28.650 19:53:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:12:28.650 19:53:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:12:28.650 19:53:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:12:28.650 19:53:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:12:28.650 Process raid pid: 81031 00:12:28.650 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:12:28.650 19:53:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=81031 00:12:28.650 19:53:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 81031' 00:12:28.650 19:53:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 81031 00:12:28.650 19:53:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 81031 ']' 00:12:28.650 19:53:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:28.650 19:53:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:28.650 19:53:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:28.650 19:53:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:28.650 19:53:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:28.650 19:53:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:28.908 [2024-11-26 19:53:19.594775] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 
00:12:28.908 [2024-11-26 19:53:19.595181] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:28.908 [2024-11-26 19:53:19.757815] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:29.165 [2024-11-26 19:53:19.873254] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:29.165 [2024-11-26 19:53:20.020632] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:29.165 [2024-11-26 19:53:20.020670] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:29.731 19:53:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:29.731 19:53:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:12:29.731 19:53:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:29.731 19:53:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.731 19:53:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:29.731 [2024-11-26 19:53:20.440287] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:29.731 [2024-11-26 19:53:20.440355] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:29.731 [2024-11-26 19:53:20.440366] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:29.731 [2024-11-26 19:53:20.440376] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:29.731 [2024-11-26 19:53:20.440383] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev3 00:12:29.731 [2024-11-26 19:53:20.440392] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:29.731 [2024-11-26 19:53:20.440398] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:29.731 [2024-11-26 19:53:20.440406] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:29.731 19:53:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.731 19:53:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:12:29.731 19:53:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:29.731 19:53:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:29.731 19:53:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:29.731 19:53:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:29.731 19:53:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:29.731 19:53:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:29.731 19:53:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:29.731 19:53:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:29.731 19:53:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:29.731 19:53:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:29.731 19:53:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name 
== "Existed_Raid")' 00:12:29.731 19:53:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.731 19:53:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:29.731 19:53:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.731 19:53:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:29.731 "name": "Existed_Raid", 00:12:29.731 "uuid": "d7e9c5a0-0551-49fc-8728-dd3b83e4b2ae", 00:12:29.731 "strip_size_kb": 64, 00:12:29.731 "state": "configuring", 00:12:29.731 "raid_level": "raid5f", 00:12:29.731 "superblock": true, 00:12:29.731 "num_base_bdevs": 4, 00:12:29.731 "num_base_bdevs_discovered": 0, 00:12:29.731 "num_base_bdevs_operational": 4, 00:12:29.731 "base_bdevs_list": [ 00:12:29.731 { 00:12:29.731 "name": "BaseBdev1", 00:12:29.731 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:29.731 "is_configured": false, 00:12:29.731 "data_offset": 0, 00:12:29.731 "data_size": 0 00:12:29.731 }, 00:12:29.731 { 00:12:29.731 "name": "BaseBdev2", 00:12:29.731 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:29.731 "is_configured": false, 00:12:29.731 "data_offset": 0, 00:12:29.731 "data_size": 0 00:12:29.731 }, 00:12:29.731 { 00:12:29.731 "name": "BaseBdev3", 00:12:29.731 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:29.731 "is_configured": false, 00:12:29.731 "data_offset": 0, 00:12:29.731 "data_size": 0 00:12:29.731 }, 00:12:29.731 { 00:12:29.731 "name": "BaseBdev4", 00:12:29.732 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:29.732 "is_configured": false, 00:12:29.732 "data_offset": 0, 00:12:29.732 "data_size": 0 00:12:29.732 } 00:12:29.732 ] 00:12:29.732 }' 00:12:29.732 19:53:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:29.732 19:53:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:12:29.990 19:53:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:29.991 19:53:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.991 19:53:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:29.991 [2024-11-26 19:53:20.776286] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:29.991 [2024-11-26 19:53:20.776325] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:12:29.991 19:53:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.991 19:53:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:29.991 19:53:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.991 19:53:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:29.991 [2024-11-26 19:53:20.784309] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:29.991 [2024-11-26 19:53:20.784363] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:29.991 [2024-11-26 19:53:20.784373] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:29.991 [2024-11-26 19:53:20.784382] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:29.991 [2024-11-26 19:53:20.784389] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:29.991 [2024-11-26 19:53:20.784398] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:29.991 [2024-11-26 19:53:20.784404] 
bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:29.991 [2024-11-26 19:53:20.784412] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:29.991 19:53:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.991 19:53:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:29.991 19:53:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.991 19:53:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:29.991 [2024-11-26 19:53:20.818584] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:29.991 BaseBdev1 00:12:29.991 19:53:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.991 19:53:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:12:29.991 19:53:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:29.991 19:53:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:29.991 19:53:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:29.991 19:53:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:29.991 19:53:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:29.991 19:53:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:29.991 19:53:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.991 19:53:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:12:29.991 19:53:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.991 19:53:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:29.991 19:53:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.991 19:53:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:29.991 [ 00:12:29.991 { 00:12:29.991 "name": "BaseBdev1", 00:12:29.991 "aliases": [ 00:12:29.991 "c60f3806-cdaa-4d12-8d7a-7aade24bf496" 00:12:29.991 ], 00:12:29.991 "product_name": "Malloc disk", 00:12:29.991 "block_size": 512, 00:12:29.991 "num_blocks": 65536, 00:12:29.991 "uuid": "c60f3806-cdaa-4d12-8d7a-7aade24bf496", 00:12:29.991 "assigned_rate_limits": { 00:12:29.991 "rw_ios_per_sec": 0, 00:12:29.991 "rw_mbytes_per_sec": 0, 00:12:29.991 "r_mbytes_per_sec": 0, 00:12:29.991 "w_mbytes_per_sec": 0 00:12:29.991 }, 00:12:29.991 "claimed": true, 00:12:29.991 "claim_type": "exclusive_write", 00:12:29.991 "zoned": false, 00:12:29.991 "supported_io_types": { 00:12:29.991 "read": true, 00:12:29.991 "write": true, 00:12:29.991 "unmap": true, 00:12:29.991 "flush": true, 00:12:29.991 "reset": true, 00:12:29.991 "nvme_admin": false, 00:12:29.991 "nvme_io": false, 00:12:29.991 "nvme_io_md": false, 00:12:29.991 "write_zeroes": true, 00:12:29.991 "zcopy": true, 00:12:29.991 "get_zone_info": false, 00:12:29.991 "zone_management": false, 00:12:29.991 "zone_append": false, 00:12:29.991 "compare": false, 00:12:29.991 "compare_and_write": false, 00:12:29.991 "abort": true, 00:12:29.991 "seek_hole": false, 00:12:29.991 "seek_data": false, 00:12:29.991 "copy": true, 00:12:29.991 "nvme_iov_md": false 00:12:29.991 }, 00:12:29.991 "memory_domains": [ 00:12:29.991 { 00:12:29.991 "dma_device_id": "system", 00:12:29.991 "dma_device_type": 1 00:12:29.991 }, 00:12:29.991 { 00:12:29.991 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:12:29.991 "dma_device_type": 2 00:12:29.991 } 00:12:29.991 ], 00:12:29.991 "driver_specific": {} 00:12:29.991 } 00:12:29.991 ] 00:12:29.991 19:53:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.991 19:53:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:29.991 19:53:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:12:29.991 19:53:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:29.991 19:53:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:29.991 19:53:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:29.991 19:53:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:29.991 19:53:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:29.991 19:53:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:29.991 19:53:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:29.991 19:53:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:29.991 19:53:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:29.991 19:53:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:29.991 19:53:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:29.991 19:53:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.991 19:53:20 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:29.991 19:53:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.991 19:53:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:29.991 "name": "Existed_Raid", 00:12:29.991 "uuid": "7c5f9910-d740-48b7-93c8-7623d04f3fb4", 00:12:29.991 "strip_size_kb": 64, 00:12:29.991 "state": "configuring", 00:12:29.991 "raid_level": "raid5f", 00:12:29.991 "superblock": true, 00:12:29.991 "num_base_bdevs": 4, 00:12:29.991 "num_base_bdevs_discovered": 1, 00:12:29.991 "num_base_bdevs_operational": 4, 00:12:29.991 "base_bdevs_list": [ 00:12:29.991 { 00:12:29.991 "name": "BaseBdev1", 00:12:29.991 "uuid": "c60f3806-cdaa-4d12-8d7a-7aade24bf496", 00:12:29.991 "is_configured": true, 00:12:29.991 "data_offset": 2048, 00:12:29.991 "data_size": 63488 00:12:29.991 }, 00:12:29.991 { 00:12:29.991 "name": "BaseBdev2", 00:12:29.991 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:29.991 "is_configured": false, 00:12:29.991 "data_offset": 0, 00:12:29.991 "data_size": 0 00:12:29.991 }, 00:12:29.991 { 00:12:29.991 "name": "BaseBdev3", 00:12:29.991 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:29.991 "is_configured": false, 00:12:29.991 "data_offset": 0, 00:12:29.991 "data_size": 0 00:12:29.991 }, 00:12:29.991 { 00:12:29.991 "name": "BaseBdev4", 00:12:29.991 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:29.991 "is_configured": false, 00:12:29.991 "data_offset": 0, 00:12:29.991 "data_size": 0 00:12:29.991 } 00:12:29.991 ] 00:12:29.991 }' 00:12:29.991 19:53:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:29.991 19:53:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:30.250 19:53:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:30.250 19:53:21 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.250 19:53:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:30.250 [2024-11-26 19:53:21.162725] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:30.250 [2024-11-26 19:53:21.162780] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:12:30.250 19:53:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.250 19:53:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:30.250 19:53:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.250 19:53:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:30.250 [2024-11-26 19:53:21.170787] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:30.250 [2024-11-26 19:53:21.172861] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:30.250 [2024-11-26 19:53:21.172981] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:30.250 [2024-11-26 19:53:21.173043] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:30.250 [2024-11-26 19:53:21.173073] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:30.250 [2024-11-26 19:53:21.173130] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:30.250 [2024-11-26 19:53:21.173156] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:30.250 19:53:21 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.250 19:53:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:12:30.250 19:53:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:30.250 19:53:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:12:30.250 19:53:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:30.250 19:53:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:30.250 19:53:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:30.250 19:53:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:30.250 19:53:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:30.250 19:53:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:30.250 19:53:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:30.250 19:53:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:30.250 19:53:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:30.250 19:53:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:30.250 19:53:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:30.250 19:53:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.250 19:53:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:30.508 19:53:21 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.508 19:53:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:30.508 "name": "Existed_Raid", 00:12:30.508 "uuid": "2dbcd8bc-c89e-459d-82a5-f120e62c5cdb", 00:12:30.508 "strip_size_kb": 64, 00:12:30.508 "state": "configuring", 00:12:30.508 "raid_level": "raid5f", 00:12:30.508 "superblock": true, 00:12:30.508 "num_base_bdevs": 4, 00:12:30.508 "num_base_bdevs_discovered": 1, 00:12:30.508 "num_base_bdevs_operational": 4, 00:12:30.508 "base_bdevs_list": [ 00:12:30.508 { 00:12:30.508 "name": "BaseBdev1", 00:12:30.508 "uuid": "c60f3806-cdaa-4d12-8d7a-7aade24bf496", 00:12:30.508 "is_configured": true, 00:12:30.508 "data_offset": 2048, 00:12:30.508 "data_size": 63488 00:12:30.508 }, 00:12:30.508 { 00:12:30.508 "name": "BaseBdev2", 00:12:30.508 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:30.508 "is_configured": false, 00:12:30.508 "data_offset": 0, 00:12:30.508 "data_size": 0 00:12:30.508 }, 00:12:30.508 { 00:12:30.508 "name": "BaseBdev3", 00:12:30.508 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:30.508 "is_configured": false, 00:12:30.508 "data_offset": 0, 00:12:30.508 "data_size": 0 00:12:30.508 }, 00:12:30.508 { 00:12:30.508 "name": "BaseBdev4", 00:12:30.508 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:30.508 "is_configured": false, 00:12:30.508 "data_offset": 0, 00:12:30.508 "data_size": 0 00:12:30.508 } 00:12:30.508 ] 00:12:30.508 }' 00:12:30.508 19:53:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:30.508 19:53:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:30.767 19:53:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:30.767 19:53:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:12:30.767 19:53:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:30.767 [2024-11-26 19:53:21.507221] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:30.767 BaseBdev2 00:12:30.767 19:53:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.767 19:53:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:12:30.767 19:53:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:30.767 19:53:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:30.767 19:53:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:30.767 19:53:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:30.767 19:53:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:30.767 19:53:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:30.767 19:53:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.767 19:53:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:30.767 19:53:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.767 19:53:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:30.767 19:53:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.767 19:53:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:30.767 [ 00:12:30.767 { 00:12:30.767 "name": "BaseBdev2", 00:12:30.767 "aliases": [ 00:12:30.767 
"6c15c57c-e41e-4484-b696-9f1a06d468bb" 00:12:30.767 ], 00:12:30.767 "product_name": "Malloc disk", 00:12:30.767 "block_size": 512, 00:12:30.767 "num_blocks": 65536, 00:12:30.767 "uuid": "6c15c57c-e41e-4484-b696-9f1a06d468bb", 00:12:30.767 "assigned_rate_limits": { 00:12:30.767 "rw_ios_per_sec": 0, 00:12:30.767 "rw_mbytes_per_sec": 0, 00:12:30.767 "r_mbytes_per_sec": 0, 00:12:30.767 "w_mbytes_per_sec": 0 00:12:30.767 }, 00:12:30.767 "claimed": true, 00:12:30.767 "claim_type": "exclusive_write", 00:12:30.767 "zoned": false, 00:12:30.767 "supported_io_types": { 00:12:30.767 "read": true, 00:12:30.767 "write": true, 00:12:30.767 "unmap": true, 00:12:30.767 "flush": true, 00:12:30.767 "reset": true, 00:12:30.767 "nvme_admin": false, 00:12:30.767 "nvme_io": false, 00:12:30.767 "nvme_io_md": false, 00:12:30.767 "write_zeroes": true, 00:12:30.767 "zcopy": true, 00:12:30.767 "get_zone_info": false, 00:12:30.767 "zone_management": false, 00:12:30.767 "zone_append": false, 00:12:30.767 "compare": false, 00:12:30.767 "compare_and_write": false, 00:12:30.767 "abort": true, 00:12:30.767 "seek_hole": false, 00:12:30.767 "seek_data": false, 00:12:30.767 "copy": true, 00:12:30.767 "nvme_iov_md": false 00:12:30.767 }, 00:12:30.767 "memory_domains": [ 00:12:30.767 { 00:12:30.767 "dma_device_id": "system", 00:12:30.767 "dma_device_type": 1 00:12:30.767 }, 00:12:30.767 { 00:12:30.767 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:30.767 "dma_device_type": 2 00:12:30.767 } 00:12:30.767 ], 00:12:30.767 "driver_specific": {} 00:12:30.767 } 00:12:30.767 ] 00:12:30.767 19:53:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.767 19:53:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:30.767 19:53:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:30.768 19:53:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 
00:12:30.768 19:53:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:12:30.768 19:53:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:30.768 19:53:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:30.768 19:53:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:30.768 19:53:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:30.768 19:53:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:30.768 19:53:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:30.768 19:53:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:30.768 19:53:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:30.768 19:53:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:30.768 19:53:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:30.768 19:53:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.768 19:53:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:30.768 19:53:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:30.768 19:53:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.768 19:53:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:30.768 "name": "Existed_Raid", 00:12:30.768 "uuid": 
"2dbcd8bc-c89e-459d-82a5-f120e62c5cdb", 00:12:30.768 "strip_size_kb": 64, 00:12:30.768 "state": "configuring", 00:12:30.768 "raid_level": "raid5f", 00:12:30.768 "superblock": true, 00:12:30.768 "num_base_bdevs": 4, 00:12:30.768 "num_base_bdevs_discovered": 2, 00:12:30.768 "num_base_bdevs_operational": 4, 00:12:30.768 "base_bdevs_list": [ 00:12:30.768 { 00:12:30.768 "name": "BaseBdev1", 00:12:30.768 "uuid": "c60f3806-cdaa-4d12-8d7a-7aade24bf496", 00:12:30.768 "is_configured": true, 00:12:30.768 "data_offset": 2048, 00:12:30.768 "data_size": 63488 00:12:30.768 }, 00:12:30.768 { 00:12:30.768 "name": "BaseBdev2", 00:12:30.768 "uuid": "6c15c57c-e41e-4484-b696-9f1a06d468bb", 00:12:30.768 "is_configured": true, 00:12:30.768 "data_offset": 2048, 00:12:30.768 "data_size": 63488 00:12:30.768 }, 00:12:30.768 { 00:12:30.768 "name": "BaseBdev3", 00:12:30.768 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:30.768 "is_configured": false, 00:12:30.768 "data_offset": 0, 00:12:30.768 "data_size": 0 00:12:30.768 }, 00:12:30.768 { 00:12:30.768 "name": "BaseBdev4", 00:12:30.768 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:30.768 "is_configured": false, 00:12:30.768 "data_offset": 0, 00:12:30.768 "data_size": 0 00:12:30.768 } 00:12:30.768 ] 00:12:30.768 }' 00:12:30.768 19:53:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:30.768 19:53:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:31.027 19:53:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:31.027 19:53:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.027 19:53:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:31.027 [2024-11-26 19:53:21.878110] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:31.027 BaseBdev3 
00:12:31.027 19:53:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.027 19:53:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:12:31.027 19:53:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:12:31.027 19:53:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:31.027 19:53:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:31.027 19:53:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:31.027 19:53:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:31.027 19:53:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:31.027 19:53:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.027 19:53:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:31.027 19:53:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.027 19:53:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:31.027 19:53:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.027 19:53:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:31.027 [ 00:12:31.027 { 00:12:31.027 "name": "BaseBdev3", 00:12:31.027 "aliases": [ 00:12:31.027 "ed53e156-1811-408a-bccd-c1bb657c10d4" 00:12:31.027 ], 00:12:31.027 "product_name": "Malloc disk", 00:12:31.027 "block_size": 512, 00:12:31.027 "num_blocks": 65536, 00:12:31.027 "uuid": "ed53e156-1811-408a-bccd-c1bb657c10d4", 00:12:31.027 
"assigned_rate_limits": { 00:12:31.027 "rw_ios_per_sec": 0, 00:12:31.027 "rw_mbytes_per_sec": 0, 00:12:31.027 "r_mbytes_per_sec": 0, 00:12:31.027 "w_mbytes_per_sec": 0 00:12:31.027 }, 00:12:31.027 "claimed": true, 00:12:31.027 "claim_type": "exclusive_write", 00:12:31.027 "zoned": false, 00:12:31.027 "supported_io_types": { 00:12:31.027 "read": true, 00:12:31.027 "write": true, 00:12:31.027 "unmap": true, 00:12:31.027 "flush": true, 00:12:31.027 "reset": true, 00:12:31.027 "nvme_admin": false, 00:12:31.027 "nvme_io": false, 00:12:31.027 "nvme_io_md": false, 00:12:31.027 "write_zeroes": true, 00:12:31.027 "zcopy": true, 00:12:31.027 "get_zone_info": false, 00:12:31.027 "zone_management": false, 00:12:31.027 "zone_append": false, 00:12:31.027 "compare": false, 00:12:31.027 "compare_and_write": false, 00:12:31.027 "abort": true, 00:12:31.027 "seek_hole": false, 00:12:31.027 "seek_data": false, 00:12:31.027 "copy": true, 00:12:31.027 "nvme_iov_md": false 00:12:31.027 }, 00:12:31.027 "memory_domains": [ 00:12:31.027 { 00:12:31.027 "dma_device_id": "system", 00:12:31.027 "dma_device_type": 1 00:12:31.027 }, 00:12:31.027 { 00:12:31.027 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:31.027 "dma_device_type": 2 00:12:31.027 } 00:12:31.027 ], 00:12:31.027 "driver_specific": {} 00:12:31.027 } 00:12:31.027 ] 00:12:31.027 19:53:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.027 19:53:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:31.028 19:53:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:31.028 19:53:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:31.028 19:53:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:12:31.028 19:53:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:12:31.028 19:53:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:31.028 19:53:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:31.028 19:53:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:31.028 19:53:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:31.028 19:53:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:31.028 19:53:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:31.028 19:53:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:31.028 19:53:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:31.028 19:53:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:31.028 19:53:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.028 19:53:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:31.028 19:53:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:31.028 19:53:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.028 19:53:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:31.028 "name": "Existed_Raid", 00:12:31.028 "uuid": "2dbcd8bc-c89e-459d-82a5-f120e62c5cdb", 00:12:31.028 "strip_size_kb": 64, 00:12:31.028 "state": "configuring", 00:12:31.028 "raid_level": "raid5f", 00:12:31.028 "superblock": true, 00:12:31.028 "num_base_bdevs": 4, 00:12:31.028 "num_base_bdevs_discovered": 3, 
00:12:31.028 "num_base_bdevs_operational": 4, 00:12:31.028 "base_bdevs_list": [ 00:12:31.028 { 00:12:31.028 "name": "BaseBdev1", 00:12:31.028 "uuid": "c60f3806-cdaa-4d12-8d7a-7aade24bf496", 00:12:31.028 "is_configured": true, 00:12:31.028 "data_offset": 2048, 00:12:31.028 "data_size": 63488 00:12:31.028 }, 00:12:31.028 { 00:12:31.028 "name": "BaseBdev2", 00:12:31.028 "uuid": "6c15c57c-e41e-4484-b696-9f1a06d468bb", 00:12:31.028 "is_configured": true, 00:12:31.028 "data_offset": 2048, 00:12:31.028 "data_size": 63488 00:12:31.028 }, 00:12:31.028 { 00:12:31.028 "name": "BaseBdev3", 00:12:31.028 "uuid": "ed53e156-1811-408a-bccd-c1bb657c10d4", 00:12:31.028 "is_configured": true, 00:12:31.028 "data_offset": 2048, 00:12:31.028 "data_size": 63488 00:12:31.028 }, 00:12:31.028 { 00:12:31.028 "name": "BaseBdev4", 00:12:31.028 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:31.028 "is_configured": false, 00:12:31.028 "data_offset": 0, 00:12:31.028 "data_size": 0 00:12:31.028 } 00:12:31.028 ] 00:12:31.028 }' 00:12:31.028 19:53:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:31.028 19:53:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:31.286 19:53:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:31.286 19:53:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.286 19:53:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:31.544 [2024-11-26 19:53:22.234713] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:31.544 [2024-11-26 19:53:22.234994] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:31.544 [2024-11-26 19:53:22.235009] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:12:31.544 [2024-11-26 
19:53:22.235278] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:31.544 BaseBdev4 00:12:31.544 19:53:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.544 19:53:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:12:31.544 19:53:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:12:31.544 19:53:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:31.544 19:53:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:31.544 19:53:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:31.544 19:53:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:31.544 19:53:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:31.544 19:53:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.544 19:53:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:31.544 [2024-11-26 19:53:22.240310] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:31.544 [2024-11-26 19:53:22.240332] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:12:31.544 [2024-11-26 19:53:22.240581] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:31.544 19:53:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.544 19:53:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:31.544 19:53:22 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.544 19:53:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:31.544 [ 00:12:31.544 { 00:12:31.544 "name": "BaseBdev4", 00:12:31.544 "aliases": [ 00:12:31.544 "f22cb1bc-8724-4420-8367-4a5236eb585e" 00:12:31.544 ], 00:12:31.544 "product_name": "Malloc disk", 00:12:31.544 "block_size": 512, 00:12:31.544 "num_blocks": 65536, 00:12:31.544 "uuid": "f22cb1bc-8724-4420-8367-4a5236eb585e", 00:12:31.544 "assigned_rate_limits": { 00:12:31.544 "rw_ios_per_sec": 0, 00:12:31.544 "rw_mbytes_per_sec": 0, 00:12:31.544 "r_mbytes_per_sec": 0, 00:12:31.544 "w_mbytes_per_sec": 0 00:12:31.544 }, 00:12:31.544 "claimed": true, 00:12:31.544 "claim_type": "exclusive_write", 00:12:31.544 "zoned": false, 00:12:31.544 "supported_io_types": { 00:12:31.544 "read": true, 00:12:31.544 "write": true, 00:12:31.544 "unmap": true, 00:12:31.544 "flush": true, 00:12:31.544 "reset": true, 00:12:31.544 "nvme_admin": false, 00:12:31.544 "nvme_io": false, 00:12:31.544 "nvme_io_md": false, 00:12:31.544 "write_zeroes": true, 00:12:31.544 "zcopy": true, 00:12:31.544 "get_zone_info": false, 00:12:31.544 "zone_management": false, 00:12:31.544 "zone_append": false, 00:12:31.544 "compare": false, 00:12:31.544 "compare_and_write": false, 00:12:31.544 "abort": true, 00:12:31.544 "seek_hole": false, 00:12:31.544 "seek_data": false, 00:12:31.544 "copy": true, 00:12:31.544 "nvme_iov_md": false 00:12:31.544 }, 00:12:31.544 "memory_domains": [ 00:12:31.544 { 00:12:31.544 "dma_device_id": "system", 00:12:31.544 "dma_device_type": 1 00:12:31.544 }, 00:12:31.544 { 00:12:31.544 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:31.544 "dma_device_type": 2 00:12:31.544 } 00:12:31.544 ], 00:12:31.544 "driver_specific": {} 00:12:31.544 } 00:12:31.544 ] 00:12:31.544 19:53:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.544 19:53:22 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:31.544 19:53:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:31.544 19:53:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:31.544 19:53:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:12:31.544 19:53:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:31.544 19:53:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:31.544 19:53:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:31.544 19:53:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:31.544 19:53:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:31.544 19:53:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:31.544 19:53:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:31.544 19:53:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:31.544 19:53:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:31.544 19:53:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:31.544 19:53:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:31.544 19:53:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.545 19:53:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:12:31.545 19:53:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.545 19:53:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:31.545 "name": "Existed_Raid", 00:12:31.545 "uuid": "2dbcd8bc-c89e-459d-82a5-f120e62c5cdb", 00:12:31.545 "strip_size_kb": 64, 00:12:31.545 "state": "online", 00:12:31.545 "raid_level": "raid5f", 00:12:31.545 "superblock": true, 00:12:31.545 "num_base_bdevs": 4, 00:12:31.545 "num_base_bdevs_discovered": 4, 00:12:31.545 "num_base_bdevs_operational": 4, 00:12:31.545 "base_bdevs_list": [ 00:12:31.545 { 00:12:31.545 "name": "BaseBdev1", 00:12:31.545 "uuid": "c60f3806-cdaa-4d12-8d7a-7aade24bf496", 00:12:31.545 "is_configured": true, 00:12:31.545 "data_offset": 2048, 00:12:31.545 "data_size": 63488 00:12:31.545 }, 00:12:31.545 { 00:12:31.545 "name": "BaseBdev2", 00:12:31.545 "uuid": "6c15c57c-e41e-4484-b696-9f1a06d468bb", 00:12:31.545 "is_configured": true, 00:12:31.545 "data_offset": 2048, 00:12:31.545 "data_size": 63488 00:12:31.545 }, 00:12:31.545 { 00:12:31.545 "name": "BaseBdev3", 00:12:31.545 "uuid": "ed53e156-1811-408a-bccd-c1bb657c10d4", 00:12:31.545 "is_configured": true, 00:12:31.545 "data_offset": 2048, 00:12:31.545 "data_size": 63488 00:12:31.545 }, 00:12:31.545 { 00:12:31.545 "name": "BaseBdev4", 00:12:31.545 "uuid": "f22cb1bc-8724-4420-8367-4a5236eb585e", 00:12:31.545 "is_configured": true, 00:12:31.545 "data_offset": 2048, 00:12:31.545 "data_size": 63488 00:12:31.545 } 00:12:31.545 ] 00:12:31.545 }' 00:12:31.545 19:53:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:31.545 19:53:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:31.803 19:53:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:12:31.803 19:53:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=Existed_Raid 00:12:31.803 19:53:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:31.803 19:53:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:31.803 19:53:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:12:31.803 19:53:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:31.803 19:53:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:31.803 19:53:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:31.803 19:53:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.803 19:53:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:31.803 [2024-11-26 19:53:22.610327] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:31.803 19:53:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.803 19:53:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:31.803 "name": "Existed_Raid", 00:12:31.803 "aliases": [ 00:12:31.803 "2dbcd8bc-c89e-459d-82a5-f120e62c5cdb" 00:12:31.803 ], 00:12:31.803 "product_name": "Raid Volume", 00:12:31.803 "block_size": 512, 00:12:31.803 "num_blocks": 190464, 00:12:31.803 "uuid": "2dbcd8bc-c89e-459d-82a5-f120e62c5cdb", 00:12:31.803 "assigned_rate_limits": { 00:12:31.803 "rw_ios_per_sec": 0, 00:12:31.803 "rw_mbytes_per_sec": 0, 00:12:31.803 "r_mbytes_per_sec": 0, 00:12:31.803 "w_mbytes_per_sec": 0 00:12:31.803 }, 00:12:31.803 "claimed": false, 00:12:31.803 "zoned": false, 00:12:31.803 "supported_io_types": { 00:12:31.803 "read": true, 00:12:31.803 "write": true, 00:12:31.803 "unmap": false, 00:12:31.803 "flush": false, 
00:12:31.803 "reset": true, 00:12:31.803 "nvme_admin": false, 00:12:31.803 "nvme_io": false, 00:12:31.803 "nvme_io_md": false, 00:12:31.803 "write_zeroes": true, 00:12:31.803 "zcopy": false, 00:12:31.803 "get_zone_info": false, 00:12:31.803 "zone_management": false, 00:12:31.803 "zone_append": false, 00:12:31.803 "compare": false, 00:12:31.803 "compare_and_write": false, 00:12:31.803 "abort": false, 00:12:31.803 "seek_hole": false, 00:12:31.803 "seek_data": false, 00:12:31.803 "copy": false, 00:12:31.803 "nvme_iov_md": false 00:12:31.803 }, 00:12:31.803 "driver_specific": { 00:12:31.803 "raid": { 00:12:31.803 "uuid": "2dbcd8bc-c89e-459d-82a5-f120e62c5cdb", 00:12:31.803 "strip_size_kb": 64, 00:12:31.803 "state": "online", 00:12:31.803 "raid_level": "raid5f", 00:12:31.803 "superblock": true, 00:12:31.803 "num_base_bdevs": 4, 00:12:31.803 "num_base_bdevs_discovered": 4, 00:12:31.803 "num_base_bdevs_operational": 4, 00:12:31.803 "base_bdevs_list": [ 00:12:31.803 { 00:12:31.803 "name": "BaseBdev1", 00:12:31.803 "uuid": "c60f3806-cdaa-4d12-8d7a-7aade24bf496", 00:12:31.803 "is_configured": true, 00:12:31.803 "data_offset": 2048, 00:12:31.803 "data_size": 63488 00:12:31.803 }, 00:12:31.803 { 00:12:31.803 "name": "BaseBdev2", 00:12:31.803 "uuid": "6c15c57c-e41e-4484-b696-9f1a06d468bb", 00:12:31.803 "is_configured": true, 00:12:31.803 "data_offset": 2048, 00:12:31.803 "data_size": 63488 00:12:31.803 }, 00:12:31.803 { 00:12:31.803 "name": "BaseBdev3", 00:12:31.803 "uuid": "ed53e156-1811-408a-bccd-c1bb657c10d4", 00:12:31.803 "is_configured": true, 00:12:31.803 "data_offset": 2048, 00:12:31.803 "data_size": 63488 00:12:31.803 }, 00:12:31.803 { 00:12:31.803 "name": "BaseBdev4", 00:12:31.803 "uuid": "f22cb1bc-8724-4420-8367-4a5236eb585e", 00:12:31.803 "is_configured": true, 00:12:31.803 "data_offset": 2048, 00:12:31.803 "data_size": 63488 00:12:31.803 } 00:12:31.803 ] 00:12:31.803 } 00:12:31.803 } 00:12:31.803 }' 00:12:31.803 19:53:22 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:31.803 19:53:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:12:31.803 BaseBdev2 00:12:31.803 BaseBdev3 00:12:31.803 BaseBdev4' 00:12:31.803 19:53:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:31.803 19:53:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:31.803 19:53:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:31.803 19:53:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:31.803 19:53:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:12:31.804 19:53:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.804 19:53:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:31.804 19:53:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.804 19:53:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:31.804 19:53:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:31.804 19:53:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:31.804 19:53:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:31.804 19:53:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.804 19:53:22 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:12:31.804 19:53:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:32.062 19:53:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.062 19:53:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:32.062 19:53:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:32.062 19:53:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:32.062 19:53:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:32.062 19:53:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.062 19:53:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:32.062 19:53:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.062 19:53:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.062 19:53:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:32.062 19:53:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:32.062 19:53:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:32.062 19:53:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:32.062 19:53:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:32.062 19:53:22 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.062 19:53:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.062 19:53:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.062 19:53:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:32.062 19:53:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:32.062 19:53:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:32.062 19:53:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.062 19:53:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.062 [2024-11-26 19:53:22.830177] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:32.062 19:53:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.062 19:53:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:12:32.062 19:53:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:12:32.062 19:53:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:32.062 19:53:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:12:32.062 19:53:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:12:32.062 19:53:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:12:32.062 19:53:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:32.062 19:53:22 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:32.062 19:53:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:32.062 19:53:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:32.062 19:53:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:32.062 19:53:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:32.062 19:53:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:32.062 19:53:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:32.062 19:53:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:32.063 19:53:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:32.063 19:53:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:32.063 19:53:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.063 19:53:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.063 19:53:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.063 19:53:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:32.063 "name": "Existed_Raid", 00:12:32.063 "uuid": "2dbcd8bc-c89e-459d-82a5-f120e62c5cdb", 00:12:32.063 "strip_size_kb": 64, 00:12:32.063 "state": "online", 00:12:32.063 "raid_level": "raid5f", 00:12:32.063 "superblock": true, 00:12:32.063 "num_base_bdevs": 4, 00:12:32.063 "num_base_bdevs_discovered": 3, 00:12:32.063 "num_base_bdevs_operational": 3, 00:12:32.063 "base_bdevs_list": [ 00:12:32.063 { 00:12:32.063 "name": 
null, 00:12:32.063 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:32.063 "is_configured": false, 00:12:32.063 "data_offset": 0, 00:12:32.063 "data_size": 63488 00:12:32.063 }, 00:12:32.063 { 00:12:32.063 "name": "BaseBdev2", 00:12:32.063 "uuid": "6c15c57c-e41e-4484-b696-9f1a06d468bb", 00:12:32.063 "is_configured": true, 00:12:32.063 "data_offset": 2048, 00:12:32.063 "data_size": 63488 00:12:32.063 }, 00:12:32.063 { 00:12:32.063 "name": "BaseBdev3", 00:12:32.063 "uuid": "ed53e156-1811-408a-bccd-c1bb657c10d4", 00:12:32.063 "is_configured": true, 00:12:32.063 "data_offset": 2048, 00:12:32.063 "data_size": 63488 00:12:32.063 }, 00:12:32.063 { 00:12:32.063 "name": "BaseBdev4", 00:12:32.063 "uuid": "f22cb1bc-8724-4420-8367-4a5236eb585e", 00:12:32.063 "is_configured": true, 00:12:32.063 "data_offset": 2048, 00:12:32.063 "data_size": 63488 00:12:32.063 } 00:12:32.063 ] 00:12:32.063 }' 00:12:32.063 19:53:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:32.063 19:53:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.321 19:53:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:12:32.321 19:53:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:32.321 19:53:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:32.321 19:53:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.321 19:53:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.321 19:53:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:32.321 19:53:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.321 19:53:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # 
raid_bdev=Existed_Raid 00:12:32.321 19:53:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:32.321 19:53:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:12:32.321 19:53:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.321 19:53:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.321 [2024-11-26 19:53:23.255485] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:32.322 [2024-11-26 19:53:23.255652] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:32.580 [2024-11-26 19:53:23.317560] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:32.580 19:53:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.580 19:53:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:32.580 19:53:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:32.580 19:53:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:32.580 19:53:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:32.580 19:53:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.580 19:53:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.580 19:53:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.580 19:53:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:32.580 19:53:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' 
Existed_Raid ']' 00:12:32.580 19:53:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:12:32.580 19:53:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.580 19:53:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.580 [2024-11-26 19:53:23.361601] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:32.580 19:53:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.580 19:53:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:32.580 19:53:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:32.580 19:53:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:32.580 19:53:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:32.580 19:53:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.580 19:53:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.580 19:53:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.580 19:53:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:32.580 19:53:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:32.580 19:53:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:12:32.580 19:53:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.580 19:53:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.580 [2024-11-26 
19:53:23.462967] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:12:32.580 [2024-11-26 19:53:23.463015] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:12:32.838 19:53:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.838 19:53:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:32.838 19:53:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:32.838 19:53:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:32.838 19:53:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.838 19:53:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.838 19:53:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:12:32.838 19:53:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.838 19:53:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:12:32.838 19:53:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:12:32.838 19:53:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:12:32.838 19:53:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:12:32.838 19:53:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:32.838 19:53:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:32.838 19:53:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.838 19:53:23 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.838 BaseBdev2 00:12:32.838 19:53:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.838 19:53:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:12:32.838 19:53:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:32.838 19:53:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:32.838 19:53:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:32.838 19:53:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:32.838 19:53:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:32.838 19:53:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:32.838 19:53:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.838 19:53:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.838 19:53:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.838 19:53:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:32.838 19:53:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.838 19:53:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.838 [ 00:12:32.838 { 00:12:32.838 "name": "BaseBdev2", 00:12:32.838 "aliases": [ 00:12:32.838 "bd07adb0-9524-4fe8-8fa7-236fbe66844a" 00:12:32.838 ], 00:12:32.838 "product_name": "Malloc disk", 00:12:32.838 "block_size": 512, 00:12:32.838 
"num_blocks": 65536, 00:12:32.838 "uuid": "bd07adb0-9524-4fe8-8fa7-236fbe66844a", 00:12:32.838 "assigned_rate_limits": { 00:12:32.838 "rw_ios_per_sec": 0, 00:12:32.838 "rw_mbytes_per_sec": 0, 00:12:32.838 "r_mbytes_per_sec": 0, 00:12:32.838 "w_mbytes_per_sec": 0 00:12:32.838 }, 00:12:32.838 "claimed": false, 00:12:32.838 "zoned": false, 00:12:32.838 "supported_io_types": { 00:12:32.838 "read": true, 00:12:32.838 "write": true, 00:12:32.838 "unmap": true, 00:12:32.838 "flush": true, 00:12:32.838 "reset": true, 00:12:32.838 "nvme_admin": false, 00:12:32.838 "nvme_io": false, 00:12:32.838 "nvme_io_md": false, 00:12:32.838 "write_zeroes": true, 00:12:32.838 "zcopy": true, 00:12:32.838 "get_zone_info": false, 00:12:32.838 "zone_management": false, 00:12:32.838 "zone_append": false, 00:12:32.838 "compare": false, 00:12:32.838 "compare_and_write": false, 00:12:32.838 "abort": true, 00:12:32.838 "seek_hole": false, 00:12:32.838 "seek_data": false, 00:12:32.838 "copy": true, 00:12:32.838 "nvme_iov_md": false 00:12:32.838 }, 00:12:32.838 "memory_domains": [ 00:12:32.838 { 00:12:32.838 "dma_device_id": "system", 00:12:32.838 "dma_device_type": 1 00:12:32.838 }, 00:12:32.838 { 00:12:32.838 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:32.838 "dma_device_type": 2 00:12:32.838 } 00:12:32.838 ], 00:12:32.838 "driver_specific": {} 00:12:32.838 } 00:12:32.838 ] 00:12:32.838 19:53:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.838 19:53:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:32.838 19:53:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:32.838 19:53:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:32.838 19:53:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:32.838 19:53:23 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.838 19:53:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.838 BaseBdev3 00:12:32.838 19:53:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.838 19:53:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:12:32.838 19:53:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:12:32.838 19:53:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:32.838 19:53:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:32.838 19:53:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:32.838 19:53:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:32.838 19:53:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:32.839 19:53:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.839 19:53:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.839 19:53:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.839 19:53:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:32.839 19:53:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.839 19:53:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.839 [ 00:12:32.839 { 00:12:32.839 "name": "BaseBdev3", 00:12:32.839 "aliases": [ 00:12:32.839 
"6992c427-2d8f-44b9-a078-e63189a3396a" 00:12:32.839 ], 00:12:32.839 "product_name": "Malloc disk", 00:12:32.839 "block_size": 512, 00:12:32.839 "num_blocks": 65536, 00:12:32.839 "uuid": "6992c427-2d8f-44b9-a078-e63189a3396a", 00:12:32.839 "assigned_rate_limits": { 00:12:32.839 "rw_ios_per_sec": 0, 00:12:32.839 "rw_mbytes_per_sec": 0, 00:12:32.839 "r_mbytes_per_sec": 0, 00:12:32.839 "w_mbytes_per_sec": 0 00:12:32.839 }, 00:12:32.839 "claimed": false, 00:12:32.839 "zoned": false, 00:12:32.839 "supported_io_types": { 00:12:32.839 "read": true, 00:12:32.839 "write": true, 00:12:32.839 "unmap": true, 00:12:32.839 "flush": true, 00:12:32.839 "reset": true, 00:12:32.839 "nvme_admin": false, 00:12:32.839 "nvme_io": false, 00:12:32.839 "nvme_io_md": false, 00:12:32.839 "write_zeroes": true, 00:12:32.839 "zcopy": true, 00:12:32.839 "get_zone_info": false, 00:12:32.839 "zone_management": false, 00:12:32.839 "zone_append": false, 00:12:32.839 "compare": false, 00:12:32.839 "compare_and_write": false, 00:12:32.839 "abort": true, 00:12:32.839 "seek_hole": false, 00:12:32.839 "seek_data": false, 00:12:32.839 "copy": true, 00:12:32.839 "nvme_iov_md": false 00:12:32.839 }, 00:12:32.839 "memory_domains": [ 00:12:32.839 { 00:12:32.839 "dma_device_id": "system", 00:12:32.839 "dma_device_type": 1 00:12:32.839 }, 00:12:32.839 { 00:12:32.839 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:32.839 "dma_device_type": 2 00:12:32.839 } 00:12:32.839 ], 00:12:32.839 "driver_specific": {} 00:12:32.839 } 00:12:32.839 ] 00:12:32.839 19:53:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.839 19:53:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:32.839 19:53:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:32.839 19:53:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:32.839 19:53:23 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:32.839 19:53:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.839 19:53:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.839 BaseBdev4 00:12:32.839 19:53:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.839 19:53:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:12:32.839 19:53:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:12:32.839 19:53:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:32.839 19:53:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:32.839 19:53:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:32.839 19:53:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:32.839 19:53:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:32.839 19:53:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.839 19:53:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.839 19:53:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.839 19:53:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:32.839 19:53:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.839 19:53:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:12:32.839 [ 00:12:32.839 { 00:12:32.839 "name": "BaseBdev4", 00:12:32.839 "aliases": [ 00:12:32.839 "2b881f2b-c4e4-44c1-9a19-195454563cf1" 00:12:32.839 ], 00:12:32.839 "product_name": "Malloc disk", 00:12:32.839 "block_size": 512, 00:12:32.839 "num_blocks": 65536, 00:12:32.839 "uuid": "2b881f2b-c4e4-44c1-9a19-195454563cf1", 00:12:32.839 "assigned_rate_limits": { 00:12:32.839 "rw_ios_per_sec": 0, 00:12:32.839 "rw_mbytes_per_sec": 0, 00:12:32.839 "r_mbytes_per_sec": 0, 00:12:32.839 "w_mbytes_per_sec": 0 00:12:32.839 }, 00:12:32.839 "claimed": false, 00:12:32.839 "zoned": false, 00:12:32.839 "supported_io_types": { 00:12:32.839 "read": true, 00:12:32.839 "write": true, 00:12:32.839 "unmap": true, 00:12:32.839 "flush": true, 00:12:32.839 "reset": true, 00:12:32.839 "nvme_admin": false, 00:12:32.839 "nvme_io": false, 00:12:32.839 "nvme_io_md": false, 00:12:32.839 "write_zeroes": true, 00:12:32.839 "zcopy": true, 00:12:32.839 "get_zone_info": false, 00:12:32.839 "zone_management": false, 00:12:32.839 "zone_append": false, 00:12:32.839 "compare": false, 00:12:32.839 "compare_and_write": false, 00:12:32.839 "abort": true, 00:12:32.839 "seek_hole": false, 00:12:32.839 "seek_data": false, 00:12:32.839 "copy": true, 00:12:32.839 "nvme_iov_md": false 00:12:32.839 }, 00:12:32.839 "memory_domains": [ 00:12:32.839 { 00:12:32.839 "dma_device_id": "system", 00:12:32.839 "dma_device_type": 1 00:12:32.839 }, 00:12:32.839 { 00:12:32.839 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:32.839 "dma_device_type": 2 00:12:32.839 } 00:12:32.839 ], 00:12:32.839 "driver_specific": {} 00:12:32.839 } 00:12:32.839 ] 00:12:32.839 19:53:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.839 19:53:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:32.839 19:53:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:32.839 19:53:23 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:32.839 19:53:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:32.839 19:53:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.839 19:53:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.839 [2024-11-26 19:53:23.723290] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:32.839 [2024-11-26 19:53:23.723334] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:32.839 [2024-11-26 19:53:23.723366] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:32.839 [2024-11-26 19:53:23.724979] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:32.839 [2024-11-26 19:53:23.725021] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:32.839 19:53:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.839 19:53:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:12:32.839 19:53:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:32.839 19:53:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:32.839 19:53:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:32.839 19:53:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:32.839 19:53:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:12:32.839 19:53:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:32.839 19:53:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:32.839 19:53:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:32.839 19:53:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:32.839 19:53:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:32.839 19:53:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.839 19:53:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.839 19:53:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:32.839 19:53:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.839 19:53:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:32.839 "name": "Existed_Raid", 00:12:32.839 "uuid": "ccc35d5e-c70b-4cb8-be58-5345588dbe53", 00:12:32.839 "strip_size_kb": 64, 00:12:32.839 "state": "configuring", 00:12:32.839 "raid_level": "raid5f", 00:12:32.839 "superblock": true, 00:12:32.839 "num_base_bdevs": 4, 00:12:32.839 "num_base_bdevs_discovered": 3, 00:12:32.839 "num_base_bdevs_operational": 4, 00:12:32.839 "base_bdevs_list": [ 00:12:32.839 { 00:12:32.839 "name": "BaseBdev1", 00:12:32.839 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:32.839 "is_configured": false, 00:12:32.839 "data_offset": 0, 00:12:32.839 "data_size": 0 00:12:32.839 }, 00:12:32.839 { 00:12:32.839 "name": "BaseBdev2", 00:12:32.839 "uuid": "bd07adb0-9524-4fe8-8fa7-236fbe66844a", 00:12:32.839 "is_configured": true, 00:12:32.839 "data_offset": 2048, 00:12:32.839 
"data_size": 63488 00:12:32.839 }, 00:12:32.839 { 00:12:32.839 "name": "BaseBdev3", 00:12:32.839 "uuid": "6992c427-2d8f-44b9-a078-e63189a3396a", 00:12:32.839 "is_configured": true, 00:12:32.839 "data_offset": 2048, 00:12:32.839 "data_size": 63488 00:12:32.839 }, 00:12:32.839 { 00:12:32.839 "name": "BaseBdev4", 00:12:32.839 "uuid": "2b881f2b-c4e4-44c1-9a19-195454563cf1", 00:12:32.839 "is_configured": true, 00:12:32.839 "data_offset": 2048, 00:12:32.839 "data_size": 63488 00:12:32.839 } 00:12:32.839 ] 00:12:32.839 }' 00:12:32.839 19:53:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:32.839 19:53:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.402 19:53:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:33.402 19:53:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.402 19:53:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.402 [2024-11-26 19:53:24.043361] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:33.402 19:53:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.402 19:53:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:12:33.402 19:53:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:33.402 19:53:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:33.402 19:53:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:33.402 19:53:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:33.402 19:53:24 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:33.402 19:53:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:33.402 19:53:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:33.402 19:53:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:33.402 19:53:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:33.402 19:53:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:33.402 19:53:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.402 19:53:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.402 19:53:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:33.402 19:53:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.402 19:53:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:33.402 "name": "Existed_Raid", 00:12:33.402 "uuid": "ccc35d5e-c70b-4cb8-be58-5345588dbe53", 00:12:33.402 "strip_size_kb": 64, 00:12:33.402 "state": "configuring", 00:12:33.402 "raid_level": "raid5f", 00:12:33.402 "superblock": true, 00:12:33.402 "num_base_bdevs": 4, 00:12:33.402 "num_base_bdevs_discovered": 2, 00:12:33.402 "num_base_bdevs_operational": 4, 00:12:33.402 "base_bdevs_list": [ 00:12:33.402 { 00:12:33.402 "name": "BaseBdev1", 00:12:33.402 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:33.402 "is_configured": false, 00:12:33.402 "data_offset": 0, 00:12:33.402 "data_size": 0 00:12:33.402 }, 00:12:33.402 { 00:12:33.402 "name": null, 00:12:33.402 "uuid": "bd07adb0-9524-4fe8-8fa7-236fbe66844a", 00:12:33.402 
"is_configured": false, 00:12:33.402 "data_offset": 0, 00:12:33.402 "data_size": 63488 00:12:33.402 }, 00:12:33.402 { 00:12:33.402 "name": "BaseBdev3", 00:12:33.402 "uuid": "6992c427-2d8f-44b9-a078-e63189a3396a", 00:12:33.402 "is_configured": true, 00:12:33.402 "data_offset": 2048, 00:12:33.402 "data_size": 63488 00:12:33.402 }, 00:12:33.402 { 00:12:33.402 "name": "BaseBdev4", 00:12:33.402 "uuid": "2b881f2b-c4e4-44c1-9a19-195454563cf1", 00:12:33.402 "is_configured": true, 00:12:33.402 "data_offset": 2048, 00:12:33.402 "data_size": 63488 00:12:33.402 } 00:12:33.402 ] 00:12:33.402 }' 00:12:33.402 19:53:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:33.402 19:53:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.659 19:53:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:33.659 19:53:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.659 19:53:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.659 19:53:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:33.659 19:53:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.659 19:53:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:12:33.659 19:53:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:33.659 19:53:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.659 19:53:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.659 [2024-11-26 19:53:24.415573] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 
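(Editorial note: the `waitforbdev` helper exercised above creates a malloc bdev, runs `bdev_wait_for_examine`, then polls `bdev_get_bdevs -b <name> -t 2000` until the bdev appears. A minimal sketch of that retry-until-timeout loop, written against a caller-supplied lookup function rather than SPDK's actual RPC client — names and defaults here are illustrative assumptions, not the harness's real signature:)

```python
import time

def wait_for_bdev(get_bdev, name, timeout_s=2.0, interval_s=0.1):
    """Poll get_bdev(name) until it returns a record or the timeout expires,
    mirroring the retry behaviour of the waitforbdev helper in this log."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        info = get_bdev(name)  # e.g. a wrapper around `rpc.py bdev_get_bdevs -b name`
        if info is not None:
            return info
        time.sleep(interval_s)
    raise TimeoutError(f"bdev {name} did not appear within {timeout_s}s")
```

In the trace, the 2000 in `bdev_get_bdevs -b BaseBdev4 -t 2000` is this timeout in milliseconds, defaulted by the helper when no explicit timeout is passed.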
00:12:33.659 BaseBdev1 00:12:33.659 19:53:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.659 19:53:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:12:33.659 19:53:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:33.659 19:53:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:33.659 19:53:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:33.659 19:53:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:33.659 19:53:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:33.659 19:53:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:33.659 19:53:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.659 19:53:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.659 19:53:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.659 19:53:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:33.659 19:53:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.659 19:53:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.659 [ 00:12:33.659 { 00:12:33.659 "name": "BaseBdev1", 00:12:33.659 "aliases": [ 00:12:33.659 "13cf5e05-67ca-4ee2-a454-bff89ef1d02b" 00:12:33.659 ], 00:12:33.659 "product_name": "Malloc disk", 00:12:33.659 "block_size": 512, 00:12:33.659 "num_blocks": 65536, 00:12:33.659 "uuid": "13cf5e05-67ca-4ee2-a454-bff89ef1d02b", 
00:12:33.659 "assigned_rate_limits": { 00:12:33.659 "rw_ios_per_sec": 0, 00:12:33.659 "rw_mbytes_per_sec": 0, 00:12:33.659 "r_mbytes_per_sec": 0, 00:12:33.659 "w_mbytes_per_sec": 0 00:12:33.659 }, 00:12:33.659 "claimed": true, 00:12:33.659 "claim_type": "exclusive_write", 00:12:33.659 "zoned": false, 00:12:33.659 "supported_io_types": { 00:12:33.659 "read": true, 00:12:33.659 "write": true, 00:12:33.659 "unmap": true, 00:12:33.659 "flush": true, 00:12:33.659 "reset": true, 00:12:33.659 "nvme_admin": false, 00:12:33.659 "nvme_io": false, 00:12:33.659 "nvme_io_md": false, 00:12:33.659 "write_zeroes": true, 00:12:33.659 "zcopy": true, 00:12:33.659 "get_zone_info": false, 00:12:33.659 "zone_management": false, 00:12:33.659 "zone_append": false, 00:12:33.659 "compare": false, 00:12:33.659 "compare_and_write": false, 00:12:33.659 "abort": true, 00:12:33.659 "seek_hole": false, 00:12:33.659 "seek_data": false, 00:12:33.659 "copy": true, 00:12:33.659 "nvme_iov_md": false 00:12:33.659 }, 00:12:33.659 "memory_domains": [ 00:12:33.659 { 00:12:33.659 "dma_device_id": "system", 00:12:33.659 "dma_device_type": 1 00:12:33.659 }, 00:12:33.659 { 00:12:33.659 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:33.659 "dma_device_type": 2 00:12:33.659 } 00:12:33.659 ], 00:12:33.659 "driver_specific": {} 00:12:33.659 } 00:12:33.659 ] 00:12:33.659 19:53:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.659 19:53:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:33.659 19:53:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:12:33.659 19:53:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:33.659 19:53:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:33.659 19:53:24 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:33.659 19:53:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:33.659 19:53:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:33.659 19:53:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:33.659 19:53:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:33.659 19:53:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:33.659 19:53:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:33.659 19:53:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:33.659 19:53:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:33.659 19:53:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.659 19:53:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.659 19:53:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.659 19:53:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:33.659 "name": "Existed_Raid", 00:12:33.659 "uuid": "ccc35d5e-c70b-4cb8-be58-5345588dbe53", 00:12:33.659 "strip_size_kb": 64, 00:12:33.660 "state": "configuring", 00:12:33.660 "raid_level": "raid5f", 00:12:33.660 "superblock": true, 00:12:33.660 "num_base_bdevs": 4, 00:12:33.660 "num_base_bdevs_discovered": 3, 00:12:33.660 "num_base_bdevs_operational": 4, 00:12:33.660 "base_bdevs_list": [ 00:12:33.660 { 00:12:33.660 "name": "BaseBdev1", 00:12:33.660 "uuid": "13cf5e05-67ca-4ee2-a454-bff89ef1d02b", 
00:12:33.660 "is_configured": true, 00:12:33.660 "data_offset": 2048, 00:12:33.660 "data_size": 63488 00:12:33.660 }, 00:12:33.660 { 00:12:33.660 "name": null, 00:12:33.660 "uuid": "bd07adb0-9524-4fe8-8fa7-236fbe66844a", 00:12:33.660 "is_configured": false, 00:12:33.660 "data_offset": 0, 00:12:33.660 "data_size": 63488 00:12:33.660 }, 00:12:33.660 { 00:12:33.660 "name": "BaseBdev3", 00:12:33.660 "uuid": "6992c427-2d8f-44b9-a078-e63189a3396a", 00:12:33.660 "is_configured": true, 00:12:33.660 "data_offset": 2048, 00:12:33.660 "data_size": 63488 00:12:33.660 }, 00:12:33.660 { 00:12:33.660 "name": "BaseBdev4", 00:12:33.660 "uuid": "2b881f2b-c4e4-44c1-9a19-195454563cf1", 00:12:33.660 "is_configured": true, 00:12:33.660 "data_offset": 2048, 00:12:33.660 "data_size": 63488 00:12:33.660 } 00:12:33.660 ] 00:12:33.660 }' 00:12:33.660 19:53:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:33.660 19:53:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.917 19:53:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:33.917 19:53:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:33.917 19:53:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.917 19:53:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.917 19:53:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.917 19:53:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:12:33.917 19:53:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:12:33.917 19:53:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
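(Editorial note: the test's state checks pipe `bdev_raid_get_bdevs all` through `jq -r '.[] | select(.name == "Existed_Raid")'` and then probe fields like `.[0].base_bdevs_list[0].is_configured`. The same selection can be reproduced in plain Python on a trimmed copy of the JSON visible in the dump above — the data below is abridged from this log, not a live RPC response:)

```python
import json

# Abridged from the Existed_Raid dump in this trace.
raid_bdevs = json.loads("""[
  {"name": "Existed_Raid",
   "state": "configuring",
   "num_base_bdevs_discovered": 3,
   "base_bdevs_list": [
     {"name": "BaseBdev1", "uuid": "13cf5e05-67ca-4ee2-a454-bff89ef1d02b", "is_configured": true},
     {"name": null, "uuid": "bd07adb0-9524-4fe8-8fa7-236fbe66844a", "is_configured": false},
     {"name": "BaseBdev3", "uuid": "6992c427-2d8f-44b9-a078-e63189a3396a", "is_configured": true},
     {"name": "BaseBdev4", "uuid": "2b881f2b-c4e4-44c1-9a19-195454563cf1", "is_configured": true}
   ]}
]""")

# Equivalent of: jq -r '.[] | select(.name == "Existed_Raid")'
raid = next(b for b in raid_bdevs if b["name"] == "Existed_Raid")
# Equivalent of: jq '.[0].base_bdevs_list[0].is_configured'
assert raid["base_bdevs_list"][0]["is_configured"] is True
# Cross-check the discovered count against the per-slot flags.
configured = sum(1 for b in raid["base_bdevs_list"] if b["is_configured"])
assert configured == raid["num_base_bdevs_discovered"]
```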
00:12:33.917 19:53:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.917 [2024-11-26 19:53:24.779697] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:33.917 19:53:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.918 19:53:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:12:33.918 19:53:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:33.918 19:53:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:33.918 19:53:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:33.918 19:53:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:33.918 19:53:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:33.918 19:53:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:33.918 19:53:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:33.918 19:53:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:33.918 19:53:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:33.918 19:53:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:33.918 19:53:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:33.918 19:53:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.918 19:53:24 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:12:33.918 19:53:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.918 19:53:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:33.918 "name": "Existed_Raid", 00:12:33.918 "uuid": "ccc35d5e-c70b-4cb8-be58-5345588dbe53", 00:12:33.918 "strip_size_kb": 64, 00:12:33.918 "state": "configuring", 00:12:33.918 "raid_level": "raid5f", 00:12:33.918 "superblock": true, 00:12:33.918 "num_base_bdevs": 4, 00:12:33.918 "num_base_bdevs_discovered": 2, 00:12:33.918 "num_base_bdevs_operational": 4, 00:12:33.918 "base_bdevs_list": [ 00:12:33.918 { 00:12:33.918 "name": "BaseBdev1", 00:12:33.918 "uuid": "13cf5e05-67ca-4ee2-a454-bff89ef1d02b", 00:12:33.918 "is_configured": true, 00:12:33.918 "data_offset": 2048, 00:12:33.918 "data_size": 63488 00:12:33.918 }, 00:12:33.918 { 00:12:33.918 "name": null, 00:12:33.918 "uuid": "bd07adb0-9524-4fe8-8fa7-236fbe66844a", 00:12:33.918 "is_configured": false, 00:12:33.918 "data_offset": 0, 00:12:33.918 "data_size": 63488 00:12:33.918 }, 00:12:33.918 { 00:12:33.918 "name": null, 00:12:33.918 "uuid": "6992c427-2d8f-44b9-a078-e63189a3396a", 00:12:33.918 "is_configured": false, 00:12:33.918 "data_offset": 0, 00:12:33.918 "data_size": 63488 00:12:33.918 }, 00:12:33.918 { 00:12:33.918 "name": "BaseBdev4", 00:12:33.918 "uuid": "2b881f2b-c4e4-44c1-9a19-195454563cf1", 00:12:33.918 "is_configured": true, 00:12:33.918 "data_offset": 2048, 00:12:33.918 "data_size": 63488 00:12:33.918 } 00:12:33.918 ] 00:12:33.918 }' 00:12:33.918 19:53:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:33.918 19:53:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:34.484 19:53:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:34.484 19:53:25 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:34.484 19:53:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.484 19:53:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:34.484 19:53:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.484 19:53:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:12:34.484 19:53:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:12:34.484 19:53:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.484 19:53:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:34.484 [2024-11-26 19:53:25.147762] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:34.484 19:53:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.484 19:53:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:12:34.484 19:53:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:34.484 19:53:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:34.484 19:53:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:34.484 19:53:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:34.484 19:53:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:34.484 19:53:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
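(Editorial note: comparing the `raid_bdev_info` dumps before and after `bdev_raid_remove_base_bdev` shows a consistent pattern: the removed slot keeps its uuid but loses its name, its `is_configured` flag, and its `data_offset`, and `num_base_bdevs_discovered` drops by one; `bdev_raid_add_base_bdev` reverses this. A small model of that bookkeeping, as observed from the output — this is a sketch of the visible behaviour, not SPDK's internal implementation:)

```python
def remove_base_bdev(raid, name):
    """Model the slot changes the log's JSON dumps show after removal."""
    for slot in raid["base_bdevs_list"]:
        if slot["name"] == name:
            # uuid is retained; name, configured flag and offset are cleared.
            slot.update(name=None, is_configured=False, data_offset=0)
            raid["num_base_bdevs_discovered"] -= 1
            return
    raise KeyError(name)

existed_raid = {
    "name": "Existed_Raid",
    "num_base_bdevs_discovered": 3,
    "base_bdevs_list": [
        {"name": "BaseBdev1", "is_configured": True, "data_offset": 2048},
        {"name": None, "is_configured": False, "data_offset": 0},
        {"name": "BaseBdev3", "is_configured": True, "data_offset": 2048},
        {"name": "BaseBdev4", "is_configured": True, "data_offset": 2048},
    ],
}
remove_base_bdev(existed_raid, "BaseBdev3")
```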
00:12:34.484 19:53:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:34.484 19:53:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:34.484 19:53:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:34.484 19:53:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:34.484 19:53:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.484 19:53:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:34.484 19:53:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:34.484 19:53:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.484 19:53:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:34.484 "name": "Existed_Raid", 00:12:34.484 "uuid": "ccc35d5e-c70b-4cb8-be58-5345588dbe53", 00:12:34.484 "strip_size_kb": 64, 00:12:34.484 "state": "configuring", 00:12:34.484 "raid_level": "raid5f", 00:12:34.484 "superblock": true, 00:12:34.484 "num_base_bdevs": 4, 00:12:34.484 "num_base_bdevs_discovered": 3, 00:12:34.484 "num_base_bdevs_operational": 4, 00:12:34.484 "base_bdevs_list": [ 00:12:34.484 { 00:12:34.484 "name": "BaseBdev1", 00:12:34.484 "uuid": "13cf5e05-67ca-4ee2-a454-bff89ef1d02b", 00:12:34.484 "is_configured": true, 00:12:34.484 "data_offset": 2048, 00:12:34.484 "data_size": 63488 00:12:34.484 }, 00:12:34.484 { 00:12:34.484 "name": null, 00:12:34.484 "uuid": "bd07adb0-9524-4fe8-8fa7-236fbe66844a", 00:12:34.484 "is_configured": false, 00:12:34.484 "data_offset": 0, 00:12:34.484 "data_size": 63488 00:12:34.484 }, 00:12:34.484 { 00:12:34.484 "name": "BaseBdev3", 00:12:34.484 "uuid": "6992c427-2d8f-44b9-a078-e63189a3396a", 
00:12:34.484 "is_configured": true, 00:12:34.484 "data_offset": 2048, 00:12:34.484 "data_size": 63488 00:12:34.484 }, 00:12:34.484 { 00:12:34.484 "name": "BaseBdev4", 00:12:34.484 "uuid": "2b881f2b-c4e4-44c1-9a19-195454563cf1", 00:12:34.484 "is_configured": true, 00:12:34.484 "data_offset": 2048, 00:12:34.484 "data_size": 63488 00:12:34.484 } 00:12:34.484 ] 00:12:34.484 }' 00:12:34.484 19:53:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:34.484 19:53:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:34.742 19:53:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:34.742 19:53:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:34.742 19:53:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.742 19:53:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:34.742 19:53:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.742 19:53:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:12:34.742 19:53:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:34.742 19:53:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.742 19:53:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:34.742 [2024-11-26 19:53:25.499858] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:34.742 19:53:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.742 19:53:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid 
configuring raid5f 64 4 00:12:34.742 19:53:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:34.742 19:53:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:34.742 19:53:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:34.742 19:53:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:34.742 19:53:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:34.742 19:53:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:34.742 19:53:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:34.742 19:53:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:34.742 19:53:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:34.742 19:53:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:34.742 19:53:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.742 19:53:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:34.742 19:53:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:34.742 19:53:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.743 19:53:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:34.743 "name": "Existed_Raid", 00:12:34.743 "uuid": "ccc35d5e-c70b-4cb8-be58-5345588dbe53", 00:12:34.743 "strip_size_kb": 64, 00:12:34.743 "state": "configuring", 00:12:34.743 "raid_level": "raid5f", 
00:12:34.743 "superblock": true, 00:12:34.743 "num_base_bdevs": 4, 00:12:34.743 "num_base_bdevs_discovered": 2, 00:12:34.743 "num_base_bdevs_operational": 4, 00:12:34.743 "base_bdevs_list": [ 00:12:34.743 { 00:12:34.743 "name": null, 00:12:34.743 "uuid": "13cf5e05-67ca-4ee2-a454-bff89ef1d02b", 00:12:34.743 "is_configured": false, 00:12:34.743 "data_offset": 0, 00:12:34.743 "data_size": 63488 00:12:34.743 }, 00:12:34.743 { 00:12:34.743 "name": null, 00:12:34.743 "uuid": "bd07adb0-9524-4fe8-8fa7-236fbe66844a", 00:12:34.743 "is_configured": false, 00:12:34.743 "data_offset": 0, 00:12:34.743 "data_size": 63488 00:12:34.743 }, 00:12:34.743 { 00:12:34.743 "name": "BaseBdev3", 00:12:34.743 "uuid": "6992c427-2d8f-44b9-a078-e63189a3396a", 00:12:34.743 "is_configured": true, 00:12:34.743 "data_offset": 2048, 00:12:34.743 "data_size": 63488 00:12:34.743 }, 00:12:34.743 { 00:12:34.743 "name": "BaseBdev4", 00:12:34.743 "uuid": "2b881f2b-c4e4-44c1-9a19-195454563cf1", 00:12:34.743 "is_configured": true, 00:12:34.743 "data_offset": 2048, 00:12:34.743 "data_size": 63488 00:12:34.743 } 00:12:34.743 ] 00:12:34.743 }' 00:12:34.743 19:53:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:34.743 19:53:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:35.001 19:53:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:35.001 19:53:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:35.001 19:53:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.001 19:53:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:35.001 19:53:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.001 19:53:25 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:12:35.001 19:53:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:12:35.001 19:53:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.001 19:53:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:35.001 [2024-11-26 19:53:25.880585] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:35.001 19:53:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.001 19:53:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:12:35.001 19:53:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:35.001 19:53:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:35.001 19:53:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:35.001 19:53:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:35.001 19:53:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:35.001 19:53:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:35.001 19:53:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:35.001 19:53:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:35.001 19:53:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:35.001 19:53:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:12:35.001 19:53:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.001 19:53:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:35.001 19:53:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:35.001 19:53:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.001 19:53:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:35.001 "name": "Existed_Raid", 00:12:35.001 "uuid": "ccc35d5e-c70b-4cb8-be58-5345588dbe53", 00:12:35.001 "strip_size_kb": 64, 00:12:35.001 "state": "configuring", 00:12:35.001 "raid_level": "raid5f", 00:12:35.001 "superblock": true, 00:12:35.001 "num_base_bdevs": 4, 00:12:35.001 "num_base_bdevs_discovered": 3, 00:12:35.001 "num_base_bdevs_operational": 4, 00:12:35.001 "base_bdevs_list": [ 00:12:35.001 { 00:12:35.001 "name": null, 00:12:35.001 "uuid": "13cf5e05-67ca-4ee2-a454-bff89ef1d02b", 00:12:35.001 "is_configured": false, 00:12:35.001 "data_offset": 0, 00:12:35.001 "data_size": 63488 00:12:35.001 }, 00:12:35.001 { 00:12:35.001 "name": "BaseBdev2", 00:12:35.001 "uuid": "bd07adb0-9524-4fe8-8fa7-236fbe66844a", 00:12:35.001 "is_configured": true, 00:12:35.001 "data_offset": 2048, 00:12:35.001 "data_size": 63488 00:12:35.001 }, 00:12:35.001 { 00:12:35.001 "name": "BaseBdev3", 00:12:35.001 "uuid": "6992c427-2d8f-44b9-a078-e63189a3396a", 00:12:35.001 "is_configured": true, 00:12:35.001 "data_offset": 2048, 00:12:35.001 "data_size": 63488 00:12:35.001 }, 00:12:35.001 { 00:12:35.001 "name": "BaseBdev4", 00:12:35.001 "uuid": "2b881f2b-c4e4-44c1-9a19-195454563cf1", 00:12:35.001 "is_configured": true, 00:12:35.001 "data_offset": 2048, 00:12:35.001 "data_size": 63488 00:12:35.001 } 00:12:35.001 ] 00:12:35.001 }' 00:12:35.001 19:53:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 
-- # xtrace_disable 00:12:35.001 19:53:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:35.566 19:53:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:35.566 19:53:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:35.566 19:53:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.566 19:53:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:35.566 19:53:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.566 19:53:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:12:35.566 19:53:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:35.566 19:53:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.566 19:53:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:35.566 19:53:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:35.566 19:53:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.566 19:53:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 13cf5e05-67ca-4ee2-a454-bff89ef1d02b 00:12:35.566 19:53:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.566 19:53:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:35.566 [2024-11-26 19:53:26.284545] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:35.566 [2024-11-26 19:53:26.284731] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:35.566 [2024-11-26 19:53:26.284742] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:12:35.566 NewBaseBdev 00:12:35.566 [2024-11-26 19:53:26.284953] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:12:35.566 19:53:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.566 19:53:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:12:35.566 19:53:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:12:35.566 19:53:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:35.566 19:53:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:35.566 19:53:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:35.566 19:53:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:35.566 19:53:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:35.566 19:53:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.566 19:53:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:35.566 [2024-11-26 19:53:26.288741] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:35.566 [2024-11-26 19:53:26.288760] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:12:35.566 [2024-11-26 19:53:26.288881] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:35.566 19:53:26 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.566 19:53:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:35.566 19:53:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.566 19:53:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:35.566 [ 00:12:35.566 { 00:12:35.566 "name": "NewBaseBdev", 00:12:35.566 "aliases": [ 00:12:35.566 "13cf5e05-67ca-4ee2-a454-bff89ef1d02b" 00:12:35.566 ], 00:12:35.566 "product_name": "Malloc disk", 00:12:35.566 "block_size": 512, 00:12:35.566 "num_blocks": 65536, 00:12:35.566 "uuid": "13cf5e05-67ca-4ee2-a454-bff89ef1d02b", 00:12:35.566 "assigned_rate_limits": { 00:12:35.566 "rw_ios_per_sec": 0, 00:12:35.566 "rw_mbytes_per_sec": 0, 00:12:35.566 "r_mbytes_per_sec": 0, 00:12:35.566 "w_mbytes_per_sec": 0 00:12:35.566 }, 00:12:35.566 "claimed": true, 00:12:35.566 "claim_type": "exclusive_write", 00:12:35.566 "zoned": false, 00:12:35.566 "supported_io_types": { 00:12:35.566 "read": true, 00:12:35.566 "write": true, 00:12:35.566 "unmap": true, 00:12:35.566 "flush": true, 00:12:35.566 "reset": true, 00:12:35.566 "nvme_admin": false, 00:12:35.566 "nvme_io": false, 00:12:35.566 "nvme_io_md": false, 00:12:35.566 "write_zeroes": true, 00:12:35.566 "zcopy": true, 00:12:35.566 "get_zone_info": false, 00:12:35.566 "zone_management": false, 00:12:35.566 "zone_append": false, 00:12:35.566 "compare": false, 00:12:35.566 "compare_and_write": false, 00:12:35.566 "abort": true, 00:12:35.566 "seek_hole": false, 00:12:35.566 "seek_data": false, 00:12:35.566 "copy": true, 00:12:35.566 "nvme_iov_md": false 00:12:35.566 }, 00:12:35.566 "memory_domains": [ 00:12:35.566 { 00:12:35.566 "dma_device_id": "system", 00:12:35.566 "dma_device_type": 1 00:12:35.566 }, 00:12:35.566 { 00:12:35.566 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:35.566 "dma_device_type": 2 00:12:35.566 } 
00:12:35.566 ], 00:12:35.566 "driver_specific": {} 00:12:35.566 } 00:12:35.566 ] 00:12:35.566 19:53:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.566 19:53:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:35.566 19:53:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:12:35.566 19:53:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:35.566 19:53:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:35.566 19:53:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:35.566 19:53:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:35.566 19:53:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:35.566 19:53:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:35.566 19:53:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:35.566 19:53:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:35.566 19:53:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:35.566 19:53:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:35.566 19:53:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:35.566 19:53:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.566 19:53:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:35.566 
19:53:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.566 19:53:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:35.566 "name": "Existed_Raid", 00:12:35.566 "uuid": "ccc35d5e-c70b-4cb8-be58-5345588dbe53", 00:12:35.566 "strip_size_kb": 64, 00:12:35.566 "state": "online", 00:12:35.566 "raid_level": "raid5f", 00:12:35.566 "superblock": true, 00:12:35.566 "num_base_bdevs": 4, 00:12:35.566 "num_base_bdevs_discovered": 4, 00:12:35.566 "num_base_bdevs_operational": 4, 00:12:35.566 "base_bdevs_list": [ 00:12:35.566 { 00:12:35.566 "name": "NewBaseBdev", 00:12:35.566 "uuid": "13cf5e05-67ca-4ee2-a454-bff89ef1d02b", 00:12:35.566 "is_configured": true, 00:12:35.566 "data_offset": 2048, 00:12:35.566 "data_size": 63488 00:12:35.566 }, 00:12:35.566 { 00:12:35.567 "name": "BaseBdev2", 00:12:35.567 "uuid": "bd07adb0-9524-4fe8-8fa7-236fbe66844a", 00:12:35.567 "is_configured": true, 00:12:35.567 "data_offset": 2048, 00:12:35.567 "data_size": 63488 00:12:35.567 }, 00:12:35.567 { 00:12:35.567 "name": "BaseBdev3", 00:12:35.567 "uuid": "6992c427-2d8f-44b9-a078-e63189a3396a", 00:12:35.567 "is_configured": true, 00:12:35.567 "data_offset": 2048, 00:12:35.567 "data_size": 63488 00:12:35.567 }, 00:12:35.567 { 00:12:35.567 "name": "BaseBdev4", 00:12:35.567 "uuid": "2b881f2b-c4e4-44c1-9a19-195454563cf1", 00:12:35.567 "is_configured": true, 00:12:35.567 "data_offset": 2048, 00:12:35.567 "data_size": 63488 00:12:35.567 } 00:12:35.567 ] 00:12:35.567 }' 00:12:35.567 19:53:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:35.567 19:53:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:35.825 19:53:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:12:35.825 19:53:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local 
raid_bdev_name=Existed_Raid 00:12:35.825 19:53:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:35.825 19:53:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:35.825 19:53:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:12:35.825 19:53:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:35.825 19:53:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:35.825 19:53:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:35.825 19:53:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.825 19:53:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:35.825 [2024-11-26 19:53:26.617541] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:35.825 19:53:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.825 19:53:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:35.825 "name": "Existed_Raid", 00:12:35.825 "aliases": [ 00:12:35.825 "ccc35d5e-c70b-4cb8-be58-5345588dbe53" 00:12:35.825 ], 00:12:35.825 "product_name": "Raid Volume", 00:12:35.825 "block_size": 512, 00:12:35.825 "num_blocks": 190464, 00:12:35.825 "uuid": "ccc35d5e-c70b-4cb8-be58-5345588dbe53", 00:12:35.825 "assigned_rate_limits": { 00:12:35.825 "rw_ios_per_sec": 0, 00:12:35.825 "rw_mbytes_per_sec": 0, 00:12:35.825 "r_mbytes_per_sec": 0, 00:12:35.825 "w_mbytes_per_sec": 0 00:12:35.825 }, 00:12:35.825 "claimed": false, 00:12:35.825 "zoned": false, 00:12:35.825 "supported_io_types": { 00:12:35.825 "read": true, 00:12:35.825 "write": true, 00:12:35.825 "unmap": false, 00:12:35.825 "flush": false, 
00:12:35.825 "reset": true, 00:12:35.825 "nvme_admin": false, 00:12:35.825 "nvme_io": false, 00:12:35.825 "nvme_io_md": false, 00:12:35.825 "write_zeroes": true, 00:12:35.825 "zcopy": false, 00:12:35.825 "get_zone_info": false, 00:12:35.825 "zone_management": false, 00:12:35.825 "zone_append": false, 00:12:35.825 "compare": false, 00:12:35.825 "compare_and_write": false, 00:12:35.825 "abort": false, 00:12:35.826 "seek_hole": false, 00:12:35.826 "seek_data": false, 00:12:35.826 "copy": false, 00:12:35.826 "nvme_iov_md": false 00:12:35.826 }, 00:12:35.826 "driver_specific": { 00:12:35.826 "raid": { 00:12:35.826 "uuid": "ccc35d5e-c70b-4cb8-be58-5345588dbe53", 00:12:35.826 "strip_size_kb": 64, 00:12:35.826 "state": "online", 00:12:35.826 "raid_level": "raid5f", 00:12:35.826 "superblock": true, 00:12:35.826 "num_base_bdevs": 4, 00:12:35.826 "num_base_bdevs_discovered": 4, 00:12:35.826 "num_base_bdevs_operational": 4, 00:12:35.826 "base_bdevs_list": [ 00:12:35.826 { 00:12:35.826 "name": "NewBaseBdev", 00:12:35.826 "uuid": "13cf5e05-67ca-4ee2-a454-bff89ef1d02b", 00:12:35.826 "is_configured": true, 00:12:35.826 "data_offset": 2048, 00:12:35.826 "data_size": 63488 00:12:35.826 }, 00:12:35.826 { 00:12:35.826 "name": "BaseBdev2", 00:12:35.826 "uuid": "bd07adb0-9524-4fe8-8fa7-236fbe66844a", 00:12:35.826 "is_configured": true, 00:12:35.826 "data_offset": 2048, 00:12:35.826 "data_size": 63488 00:12:35.826 }, 00:12:35.826 { 00:12:35.826 "name": "BaseBdev3", 00:12:35.826 "uuid": "6992c427-2d8f-44b9-a078-e63189a3396a", 00:12:35.826 "is_configured": true, 00:12:35.826 "data_offset": 2048, 00:12:35.826 "data_size": 63488 00:12:35.826 }, 00:12:35.826 { 00:12:35.826 "name": "BaseBdev4", 00:12:35.826 "uuid": "2b881f2b-c4e4-44c1-9a19-195454563cf1", 00:12:35.826 "is_configured": true, 00:12:35.826 "data_offset": 2048, 00:12:35.826 "data_size": 63488 00:12:35.826 } 00:12:35.826 ] 00:12:35.826 } 00:12:35.826 } 00:12:35.826 }' 00:12:35.826 19:53:26 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:35.826 19:53:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:12:35.826 BaseBdev2 00:12:35.826 BaseBdev3 00:12:35.826 BaseBdev4' 00:12:35.826 19:53:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:35.826 19:53:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:35.826 19:53:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:35.826 19:53:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:12:35.826 19:53:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:35.826 19:53:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.826 19:53:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:35.826 19:53:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.826 19:53:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:35.826 19:53:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:35.826 19:53:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:35.826 19:53:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:35.826 19:53:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:35.826 
19:53:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.826 19:53:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:35.826 19:53:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.084 19:53:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:36.084 19:53:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:36.084 19:53:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:36.084 19:53:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:36.084 19:53:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.084 19:53:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:36.084 19:53:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:36.084 19:53:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.084 19:53:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:36.084 19:53:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:36.084 19:53:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:36.084 19:53:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:36.084 19:53:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.084 19:53:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:12:36.084 19:53:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:36.084 19:53:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.084 19:53:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:36.084 19:53:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:36.084 19:53:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:36.084 19:53:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.084 19:53:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:36.084 [2024-11-26 19:53:26.845359] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:36.084 [2024-11-26 19:53:26.845386] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:36.084 [2024-11-26 19:53:26.845452] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:36.084 [2024-11-26 19:53:26.845711] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:36.084 [2024-11-26 19:53:26.845721] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:12:36.084 19:53:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.084 19:53:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 81031 00:12:36.084 19:53:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 81031 ']' 00:12:36.084 19:53:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 81031 
00:12:36.084 19:53:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:12:36.084 19:53:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:36.085 19:53:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81031 00:12:36.085 killing process with pid 81031 00:12:36.085 19:53:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:36.085 19:53:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:36.085 19:53:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81031' 00:12:36.085 19:53:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 81031 00:12:36.085 [2024-11-26 19:53:26.875389] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:36.085 19:53:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 81031 00:12:36.373 [2024-11-26 19:53:27.079005] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:36.939 19:53:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:12:36.939 00:12:36.939 real 0m8.167s 00:12:36.939 user 0m13.041s 00:12:36.939 sys 0m1.413s 00:12:36.939 ************************************ 00:12:36.939 END TEST raid5f_state_function_test_sb 00:12:36.939 ************************************ 00:12:36.939 19:53:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:36.939 19:53:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:36.939 19:53:27 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 4 00:12:36.939 19:53:27 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 
']' 00:12:36.939 19:53:27 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:36.939 19:53:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:36.939 ************************************ 00:12:36.939 START TEST raid5f_superblock_test 00:12:36.939 ************************************ 00:12:36.939 19:53:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid5f 4 00:12:36.939 19:53:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:12:36.939 19:53:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:12:36.939 19:53:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:12:36.939 19:53:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:12:36.939 19:53:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:12:36.939 19:53:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:12:36.939 19:53:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:12:36.939 19:53:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:12:36.939 19:53:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:12:36.939 19:53:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:12:36.939 19:53:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:12:36.939 19:53:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:12:36.939 19:53:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:12:36.939 19:53:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:12:36.939 19:53:27 bdev_raid.raid5f_superblock_test 
-- bdev/bdev_raid.sh@405 -- # strip_size=64 00:12:36.939 19:53:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:12:36.939 19:53:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=81668 00:12:36.939 19:53:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 81668 00:12:36.939 19:53:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 81668 ']' 00:12:36.939 19:53:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:36.939 19:53:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:36.939 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:36.939 19:53:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:36.939 19:53:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:36.939 19:53:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.939 19:53:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:12:36.939 [2024-11-26 19:53:27.806751] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 
00:12:36.939 [2024-11-26 19:53:27.806887] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81668 ]
00:12:37.198 [2024-11-26 19:53:27.963266] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:37.198 [2024-11-26 19:53:28.056812] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:12:37.455 [2024-11-26 19:53:28.175250] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:12:37.455 [2024-11-26 19:53:28.175470] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:12:37.713 19:53:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:12:37.713 19:53:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@868 -- # return 0
00:12:37.713 19:53:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 ))
00:12:37.713 19:53:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:12:37.713 19:53:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1
00:12:37.713 19:53:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1
00:12:37.713 19:53:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001
00:12:37.713 19:53:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:12:37.713 19:53:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:12:37.713 19:53:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:12:37.713 19:53:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1
00:12:37.713 19:53:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:37.713 19:53:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:37.972 malloc1
00:12:37.972 19:53:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:37.972 19:53:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:12:37.972 19:53:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:37.972 19:53:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:37.972 [2024-11-26 19:53:28.679210] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:12:37.972 [2024-11-26 19:53:28.679378] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:12:37.972 [2024-11-26 19:53:28.679419] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:12:37.972 [2024-11-26 19:53:28.679784] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:12:37.972 [2024-11-26 19:53:28.681679] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:12:37.972 [2024-11-26 19:53:28.681787] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:12:37.972 pt1
00:12:37.972 19:53:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:37.972 19:53:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:12:37.972 19:53:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:12:37.972 19:53:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2
00:12:37.972 19:53:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2
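The xtrace above is one pass of bdev_raid.sh's setup loop (lines 416-426): for each i it creates a 32 MB malloc bdev with 512-byte blocks, then wraps it in a passthru bdev pt<i> carrying a fixed UUID so the superblock test gets stable identities. A sketch that just prints the RPC sequence the loop issues; rendering the calls as strings keeps it runnable without a live SPDK target (the harness itself sends them through rpc_cmd):

```shell
# Emit the per-base-bdev RPC calls from the traced loop (i = 1..4).
# The command names and arguments mirror the xtrace; nothing is executed.
base_bdev_cmds() {
  i=$1
  echo "bdev_malloc_create 32 512 -b malloc${i}"
  echo "bdev_passthru_create -b malloc${i} -p pt${i} -u 00000000-0000-0000-0000-00000000000${i}"
}

for i in 1 2 3 4; do
  base_bdev_cmds "$i"
done
```

Against a live target each printed line would be handed to scripts/rpc.py; once all four passthru bdevs exist, the trace shows the array being assembled with bdev_raid_create -z 64 -r raid5f -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s.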
00:12:37.972 19:53:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:12:37.972 19:53:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:37.972 19:53:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:37.972 19:53:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:37.972 19:53:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:12:37.972 19:53:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.972 19:53:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.972 malloc2 00:12:37.972 19:53:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.972 19:53:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:37.972 19:53:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.972 19:53:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.972 [2024-11-26 19:53:28.712212] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:37.972 [2024-11-26 19:53:28.712324] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:37.972 [2024-11-26 19:53:28.712362] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:37.972 [2024-11-26 19:53:28.712371] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:37.972 [2024-11-26 19:53:28.714193] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:37.972 [2024-11-26 19:53:28.714217] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:37.972 pt2 00:12:37.972 19:53:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.972 19:53:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:37.972 19:53:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:37.972 19:53:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:12:37.972 19:53:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:12:37.972 19:53:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:12:37.972 19:53:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:37.972 19:53:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:37.972 19:53:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:37.972 19:53:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:12:37.972 19:53:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.972 19:53:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.972 malloc3 00:12:37.972 19:53:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.972 19:53:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:37.972 19:53:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.972 19:53:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.972 [2024-11-26 19:53:28.765913] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:37.972 [2024-11-26 19:53:28.765953] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:37.972 [2024-11-26 19:53:28.765970] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:12:37.972 [2024-11-26 19:53:28.765977] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:37.972 [2024-11-26 19:53:28.767793] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:37.972 [2024-11-26 19:53:28.767821] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:37.972 pt3 00:12:37.972 19:53:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.972 19:53:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:37.972 19:53:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:37.972 19:53:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:12:37.972 19:53:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:12:37.972 19:53:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:12:37.972 19:53:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:37.972 19:53:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:37.972 19:53:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:37.972 19:53:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:12:37.972 19:53:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.972 19:53:28 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.972 malloc4 00:12:37.972 19:53:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.972 19:53:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:12:37.972 19:53:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.972 19:53:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.972 [2024-11-26 19:53:28.799187] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:12:37.972 [2024-11-26 19:53:28.799222] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:37.972 [2024-11-26 19:53:28.799235] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:12:37.972 [2024-11-26 19:53:28.799242] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:37.972 [2024-11-26 19:53:28.801048] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:37.972 [2024-11-26 19:53:28.801075] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:12:37.972 pt4 00:12:37.972 19:53:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.972 19:53:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:37.972 19:53:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:37.972 19:53:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:12:37.972 19:53:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.972 19:53:28 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:37.972 [2024-11-26 19:53:28.807222] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:37.972 [2024-11-26 19:53:28.808906] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:37.972 [2024-11-26 19:53:28.808973] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:37.972 [2024-11-26 19:53:28.809010] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:12:37.972 [2024-11-26 19:53:28.809168] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:37.972 [2024-11-26 19:53:28.809180] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:12:37.972 [2024-11-26 19:53:28.809398] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:37.972 [2024-11-26 19:53:28.813351] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:37.972 [2024-11-26 19:53:28.813367] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:37.972 [2024-11-26 19:53:28.813514] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:37.972 19:53:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.972 19:53:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:12:37.972 19:53:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:37.972 19:53:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:37.972 19:53:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:37.972 19:53:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:37.972 
19:53:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:37.972 19:53:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:37.972 19:53:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:37.972 19:53:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:37.972 19:53:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:37.972 19:53:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:37.972 19:53:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.972 19:53:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:37.972 19:53:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.972 19:53:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.972 19:53:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:37.972 "name": "raid_bdev1", 00:12:37.972 "uuid": "7a88593e-54a2-4844-a2b4-a000862e8bf1", 00:12:37.972 "strip_size_kb": 64, 00:12:37.972 "state": "online", 00:12:37.972 "raid_level": "raid5f", 00:12:37.972 "superblock": true, 00:12:37.972 "num_base_bdevs": 4, 00:12:37.972 "num_base_bdevs_discovered": 4, 00:12:37.972 "num_base_bdevs_operational": 4, 00:12:37.972 "base_bdevs_list": [ 00:12:37.972 { 00:12:37.972 "name": "pt1", 00:12:37.972 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:37.972 "is_configured": true, 00:12:37.972 "data_offset": 2048, 00:12:37.972 "data_size": 63488 00:12:37.972 }, 00:12:37.972 { 00:12:37.972 "name": "pt2", 00:12:37.972 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:37.972 "is_configured": true, 00:12:37.972 "data_offset": 2048, 00:12:37.972 
"data_size": 63488 00:12:37.972 }, 00:12:37.972 { 00:12:37.972 "name": "pt3", 00:12:37.972 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:37.972 "is_configured": true, 00:12:37.972 "data_offset": 2048, 00:12:37.972 "data_size": 63488 00:12:37.972 }, 00:12:37.972 { 00:12:37.972 "name": "pt4", 00:12:37.972 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:37.972 "is_configured": true, 00:12:37.972 "data_offset": 2048, 00:12:37.972 "data_size": 63488 00:12:37.972 } 00:12:37.972 ] 00:12:37.972 }' 00:12:37.972 19:53:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:37.972 19:53:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.229 19:53:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:12:38.229 19:53:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:12:38.229 19:53:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:38.229 19:53:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:38.229 19:53:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:38.229 19:53:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:38.229 19:53:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:38.229 19:53:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:38.229 19:53:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.229 19:53:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.229 [2024-11-26 19:53:29.118327] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:38.229 19:53:29 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.229 19:53:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:38.229 "name": "raid_bdev1", 00:12:38.229 "aliases": [ 00:12:38.229 "7a88593e-54a2-4844-a2b4-a000862e8bf1" 00:12:38.229 ], 00:12:38.229 "product_name": "Raid Volume", 00:12:38.229 "block_size": 512, 00:12:38.229 "num_blocks": 190464, 00:12:38.229 "uuid": "7a88593e-54a2-4844-a2b4-a000862e8bf1", 00:12:38.229 "assigned_rate_limits": { 00:12:38.229 "rw_ios_per_sec": 0, 00:12:38.229 "rw_mbytes_per_sec": 0, 00:12:38.229 "r_mbytes_per_sec": 0, 00:12:38.229 "w_mbytes_per_sec": 0 00:12:38.229 }, 00:12:38.229 "claimed": false, 00:12:38.229 "zoned": false, 00:12:38.229 "supported_io_types": { 00:12:38.229 "read": true, 00:12:38.229 "write": true, 00:12:38.229 "unmap": false, 00:12:38.229 "flush": false, 00:12:38.229 "reset": true, 00:12:38.229 "nvme_admin": false, 00:12:38.229 "nvme_io": false, 00:12:38.229 "nvme_io_md": false, 00:12:38.229 "write_zeroes": true, 00:12:38.229 "zcopy": false, 00:12:38.229 "get_zone_info": false, 00:12:38.229 "zone_management": false, 00:12:38.229 "zone_append": false, 00:12:38.229 "compare": false, 00:12:38.229 "compare_and_write": false, 00:12:38.229 "abort": false, 00:12:38.229 "seek_hole": false, 00:12:38.229 "seek_data": false, 00:12:38.229 "copy": false, 00:12:38.229 "nvme_iov_md": false 00:12:38.229 }, 00:12:38.229 "driver_specific": { 00:12:38.229 "raid": { 00:12:38.229 "uuid": "7a88593e-54a2-4844-a2b4-a000862e8bf1", 00:12:38.229 "strip_size_kb": 64, 00:12:38.229 "state": "online", 00:12:38.229 "raid_level": "raid5f", 00:12:38.229 "superblock": true, 00:12:38.229 "num_base_bdevs": 4, 00:12:38.229 "num_base_bdevs_discovered": 4, 00:12:38.230 "num_base_bdevs_operational": 4, 00:12:38.230 "base_bdevs_list": [ 00:12:38.230 { 00:12:38.230 "name": "pt1", 00:12:38.230 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:38.230 "is_configured": true, 00:12:38.230 "data_offset": 2048, 
00:12:38.230 "data_size": 63488 00:12:38.230 }, 00:12:38.230 { 00:12:38.230 "name": "pt2", 00:12:38.230 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:38.230 "is_configured": true, 00:12:38.230 "data_offset": 2048, 00:12:38.230 "data_size": 63488 00:12:38.230 }, 00:12:38.230 { 00:12:38.230 "name": "pt3", 00:12:38.230 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:38.230 "is_configured": true, 00:12:38.230 "data_offset": 2048, 00:12:38.230 "data_size": 63488 00:12:38.230 }, 00:12:38.230 { 00:12:38.230 "name": "pt4", 00:12:38.230 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:38.230 "is_configured": true, 00:12:38.230 "data_offset": 2048, 00:12:38.230 "data_size": 63488 00:12:38.230 } 00:12:38.230 ] 00:12:38.230 } 00:12:38.230 } 00:12:38.230 }' 00:12:38.230 19:53:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:38.486 19:53:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:12:38.486 pt2 00:12:38.486 pt3 00:12:38.486 pt4' 00:12:38.486 19:53:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:38.486 19:53:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:38.487 19:53:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:38.487 19:53:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:12:38.487 19:53:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:38.487 19:53:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.487 19:53:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.487 19:53:29 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.487 19:53:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:38.487 19:53:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:38.487 19:53:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:38.487 19:53:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:12:38.487 19:53:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.487 19:53:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.487 19:53:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:38.487 19:53:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.487 19:53:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:38.487 19:53:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:38.487 19:53:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:38.487 19:53:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:12:38.487 19:53:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.487 19:53:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.487 19:53:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:38.487 19:53:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.487 19:53:29 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:38.487 19:53:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:38.487 19:53:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:38.487 19:53:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:12:38.487 19:53:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.487 19:53:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.487 19:53:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:38.487 19:53:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.487 19:53:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:38.487 19:53:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:38.487 19:53:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:38.487 19:53:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:12:38.487 19:53:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.487 19:53:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.487 [2024-11-26 19:53:29.354332] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:38.487 19:53:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.487 19:53:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=7a88593e-54a2-4844-a2b4-a000862e8bf1 00:12:38.487 19:53:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 
7a88593e-54a2-4844-a2b4-a000862e8bf1 ']' 00:12:38.487 19:53:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:38.487 19:53:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.487 19:53:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.487 [2024-11-26 19:53:29.386184] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:38.487 [2024-11-26 19:53:29.386266] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:38.487 [2024-11-26 19:53:29.386404] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:38.487 [2024-11-26 19:53:29.386534] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:38.487 [2024-11-26 19:53:29.386598] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:12:38.487 19:53:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.487 19:53:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:38.487 19:53:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:12:38.487 19:53:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.487 19:53:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.487 19:53:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.487 19:53:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:12:38.487 19:53:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:12:38.487 19:53:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:38.487 
19:53:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:12:38.487 19:53:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.487 19:53:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.745 19:53:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.745 19:53:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:38.745 19:53:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:12:38.745 19:53:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.745 19:53:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.745 19:53:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.745 19:53:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:38.745 19:53:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:12:38.745 19:53:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.745 19:53:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.745 19:53:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.745 19:53:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:38.745 19:53:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:12:38.745 19:53:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.745 19:53:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.745 19:53:29 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.745 19:53:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:12:38.745 19:53:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:12:38.745 19:53:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.745 19:53:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.745 19:53:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.745 19:53:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:12:38.745 19:53:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:12:38.745 19:53:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:12:38.745 19:53:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:12:38.745 19:53:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:12:38.745 19:53:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:38.745 19:53:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:12:38.745 19:53:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:38.745 19:53:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:12:38.745 19:53:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:12:38.745 19:53:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.745 [2024-11-26 19:53:29.494236] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:12:38.745 [2024-11-26 19:53:29.495955] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:12:38.745 [2024-11-26 19:53:29.495995] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:12:38.745 [2024-11-26 19:53:29.496023] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:12:38.745 [2024-11-26 19:53:29.496064] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:12:38.745 [2024-11-26 19:53:29.496104] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:12:38.745 [2024-11-26 19:53:29.496120] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:12:38.745 [2024-11-26 19:53:29.496135] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:12:38.745 [2024-11-26 19:53:29.496145] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:38.745 [2024-11-26 19:53:29.496155] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:12:38.745 request: 00:12:38.745 { 00:12:38.745 "name": "raid_bdev1", 00:12:38.745 "raid_level": "raid5f", 00:12:38.745 "base_bdevs": [ 00:12:38.745 "malloc1", 00:12:38.745 "malloc2", 00:12:38.745 "malloc3", 00:12:38.745 "malloc4" 00:12:38.745 ], 00:12:38.745 "strip_size_kb": 64, 00:12:38.745 "superblock": false, 00:12:38.745 "method": "bdev_raid_create", 00:12:38.745 "req_id": 1 00:12:38.745 } 00:12:38.745 Got JSON-RPC error response 
00:12:38.745 response:
00:12:38.745 {
00:12:38.745 "code": -17,
00:12:38.745 "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:12:38.745 }
00:12:38.745 19:53:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:12:38.745 19:53:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # es=1
00:12:38.745 19:53:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:12:38.745 19:53:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:12:38.745 19:53:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:12:38.745 19:53:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:38.745 19:53:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]'
00:12:38.745 19:53:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:38.745 19:53:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:38.745 19:53:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:38.745 19:53:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev=
00:12:38.745 19:53:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']'
00:12:38.745 19:53:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:12:38.745 19:53:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:38.745 19:53:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:38.745 [2024-11-26 19:53:29.538212] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:12:38.745 [2024-11-26 19:53:29.538323] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:12:38.745 [2024-11-26 19:53:29.538339] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280
00:12:38.745 [2024-11-26 19:53:29.538358] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:12:38.745 [2024-11-26 19:53:29.540223] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:12:38.745 [2024-11-26 19:53:29.540250] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:12:38.745 [2024-11-26 19:53:29.540309] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1
00:12:38.745 [2024-11-26 19:53:29.540358] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:12:38.745 pt1
00:12:38.745 19:53:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:38.745 19:53:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4
00:12:38.745 19:53:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:12:38.745 19:53:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:12:38.745 19:53:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:12:38.745 19:53:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:12:38.745 19:53:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:12:38.745 19:53:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:38.745 19:53:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:38.745 19:53:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:38.745 19:53:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:38.745 19:53:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:38.745 19:53:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:12:38.745 19:53:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:38.745 19:53:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:38.745 19:53:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:38.745 19:53:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:38.745 "name": "raid_bdev1",
00:12:38.745 "uuid": "7a88593e-54a2-4844-a2b4-a000862e8bf1",
00:12:38.745 "strip_size_kb": 64,
00:12:38.745 "state": "configuring",
00:12:38.745 "raid_level": "raid5f",
00:12:38.746 "superblock": true,
00:12:38.746 "num_base_bdevs": 4,
00:12:38.746 "num_base_bdevs_discovered": 1,
00:12:38.746 "num_base_bdevs_operational": 4,
00:12:38.746 "base_bdevs_list": [
00:12:38.746 {
00:12:38.746 "name": "pt1",
00:12:38.746 "uuid": "00000000-0000-0000-0000-000000000001",
00:12:38.746 "is_configured": true,
00:12:38.746 "data_offset": 2048,
00:12:38.746 "data_size": 63488
00:12:38.746 },
00:12:38.746 {
00:12:38.746 "name": null,
00:12:38.746 "uuid": "00000000-0000-0000-0000-000000000002",
00:12:38.746 "is_configured": false,
00:12:38.746 "data_offset": 2048,
00:12:38.746 "data_size": 63488
00:12:38.746 },
00:12:38.746 {
00:12:38.746 "name": null,
00:12:38.746 "uuid": "00000000-0000-0000-0000-000000000003",
00:12:38.746 "is_configured": false,
00:12:38.746 "data_offset": 2048,
00:12:38.746 "data_size": 63488
00:12:38.746 },
00:12:38.746 {
00:12:38.746 "name": null,
00:12:38.746 "uuid": "00000000-0000-0000-0000-000000000004",
00:12:38.746 "is_configured": false,
00:12:38.746 "data_offset": 2048,
00:12:38.746 "data_size": 63488
00:12:38.746 }
00:12:38.746 ]
00:12:38.746 }'
00:12:38.746 19:53:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:38.746 19:53:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:39.002 19:53:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']'
00:12:39.002 19:53:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:12:39.002 19:53:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:39.002 19:53:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:39.002 [2024-11-26 19:53:29.862321] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:12:39.002 [2024-11-26 19:53:29.862425] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:12:39.002 [2024-11-26 19:53:29.862457] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880
00:12:39.002 [2024-11-26 19:53:29.862558] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:12:39.002 [2024-11-26 19:53:29.862975] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:12:39.002 [2024-11-26 19:53:29.863064] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:12:39.002 [2024-11-26 19:53:29.863152] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:12:39.002 [2024-11-26 19:53:29.863176] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:12:39.002 pt2
00:12:39.003 19:53:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:39.003 19:53:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2
00:12:39.003 19:53:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:39.003 19:53:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:39.003 [2024-11-26 19:53:29.870308] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2
00:12:39.003 19:53:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:39.003 19:53:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4
00:12:39.003 19:53:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:12:39.003 19:53:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:12:39.003 19:53:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:12:39.003 19:53:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:12:39.003 19:53:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:12:39.003 19:53:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:39.003 19:53:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:39.003 19:53:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:39.003 19:53:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:39.003 19:53:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:39.003 19:53:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:12:39.003 19:53:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:39.003 19:53:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:39.003 19:53:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:39.003 19:53:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:39.003 "name": "raid_bdev1",
00:12:39.003 "uuid": "7a88593e-54a2-4844-a2b4-a000862e8bf1",
00:12:39.003 "strip_size_kb": 64,
00:12:39.003 "state": "configuring",
00:12:39.003 "raid_level": "raid5f",
00:12:39.003 "superblock": true,
00:12:39.003 "num_base_bdevs": 4,
00:12:39.003 "num_base_bdevs_discovered": 1,
00:12:39.003 "num_base_bdevs_operational": 4,
00:12:39.003 "base_bdevs_list": [
00:12:39.003 {
00:12:39.003 "name": "pt1",
00:12:39.003 "uuid": "00000000-0000-0000-0000-000000000001",
00:12:39.003 "is_configured": true,
00:12:39.003 "data_offset": 2048,
00:12:39.003 "data_size": 63488
00:12:39.003 },
00:12:39.003 {
00:12:39.003 "name": null,
00:12:39.003 "uuid": "00000000-0000-0000-0000-000000000002",
00:12:39.003 "is_configured": false,
00:12:39.003 "data_offset": 0,
00:12:39.003 "data_size": 63488
00:12:39.003 },
00:12:39.003 {
00:12:39.003 "name": null,
00:12:39.003 "uuid": "00000000-0000-0000-0000-000000000003",
00:12:39.003 "is_configured": false,
00:12:39.003 "data_offset": 2048,
00:12:39.003 "data_size": 63488
00:12:39.003 },
00:12:39.003 {
00:12:39.003 "name": null,
00:12:39.003 "uuid": "00000000-0000-0000-0000-000000000004",
00:12:39.003 "is_configured": false,
00:12:39.003 "data_offset": 2048,
00:12:39.003 "data_size": 63488
00:12:39.003 }
00:12:39.003 ]
00:12:39.003 }'
00:12:39.003 19:53:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:39.003 19:53:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:39.260 19:53:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 ))
00:12:39.260 19:53:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:12:39.260 19:53:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:12:39.260 19:53:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:39.260 19:53:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:39.260 [2024-11-26 19:53:30.194373] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:12:39.260 [2024-11-26 19:53:30.194431] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:12:39.260 [2024-11-26 19:53:30.194449] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80
00:12:39.260 [2024-11-26 19:53:30.194457] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:12:39.260 [2024-11-26 19:53:30.194851] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:12:39.260 [2024-11-26 19:53:30.194867] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:12:39.260 [2024-11-26 19:53:30.194940] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:12:39.260 [2024-11-26 19:53:30.194973] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:12:39.517 pt2
00:12:39.517 19:53:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:39.517 19:53:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:12:39.517 19:53:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:12:39.517 19:53:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:12:39.517 19:53:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:39.517 19:53:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:39.517 [2024-11-26 19:53:30.202354] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:12:39.517 [2024-11-26 19:53:30.202392] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:12:39.517 [2024-11-26 19:53:30.202410] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80
00:12:39.517 [2024-11-26 19:53:30.202417] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:12:39.517 [2024-11-26 19:53:30.202722] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:12:39.517 [2024-11-26 19:53:30.202741] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:12:39.517 [2024-11-26 19:53:30.202791] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3
00:12:39.517 [2024-11-26 19:53:30.202808] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:12:39.517 pt3
00:12:39.517 19:53:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:39.517 19:53:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:12:39.517 19:53:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:12:39.517 19:53:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004
00:12:39.517 19:53:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:39.517 19:53:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:39.517 [2024-11-26 19:53:30.210324] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4
00:12:39.517 [2024-11-26 19:53:30.210370] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:12:39.517 [2024-11-26 19:53:30.210383] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180
00:12:39.517 [2024-11-26 19:53:30.210389] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:12:39.517 [2024-11-26 19:53:30.210697] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:12:39.517 [2024-11-26 19:53:30.210716] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4
00:12:39.517 [2024-11-26 19:53:30.210764] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4
00:12:39.517 [2024-11-26 19:53:30.210780] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed
00:12:39.517 [2024-11-26 19:53:30.210892] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:12:39.517 [2024-11-26 19:53:30.210899] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512
00:12:39.517 [2024-11-26 19:53:30.211100] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0
00:12:39.518 [2024-11-26 19:53:30.214848] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
00:12:39.518 [2024-11-26 19:53:30.214865] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80
00:12:39.518 [2024-11-26 19:53:30.215010] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:12:39.518 pt4
00:12:39.518 19:53:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:39.518 19:53:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:12:39.518 19:53:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:12:39.518 19:53:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4
00:12:39.518 19:53:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:12:39.518 19:53:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:12:39.518 19:53:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:12:39.518 19:53:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:12:39.518 19:53:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:12:39.518 19:53:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:39.518 19:53:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:39.518 19:53:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:39.518 19:53:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:39.518 19:53:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:39.518 19:53:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:12:39.518 19:53:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:39.518 19:53:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:39.518 19:53:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:39.518 19:53:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:39.518 "name": "raid_bdev1",
00:12:39.518 "uuid": "7a88593e-54a2-4844-a2b4-a000862e8bf1",
00:12:39.518 "strip_size_kb": 64,
00:12:39.518 "state": "online",
00:12:39.518 "raid_level": "raid5f",
00:12:39.518 "superblock": true,
00:12:39.518 "num_base_bdevs": 4,
00:12:39.518 "num_base_bdevs_discovered": 4,
00:12:39.518 "num_base_bdevs_operational": 4,
00:12:39.518 "base_bdevs_list": [
00:12:39.518 {
00:12:39.518 "name": "pt1",
00:12:39.518 "uuid": "00000000-0000-0000-0000-000000000001",
00:12:39.518 "is_configured": true,
00:12:39.518 "data_offset": 2048,
00:12:39.518 "data_size": 63488
00:12:39.518 },
00:12:39.518 {
00:12:39.518 "name": "pt2",
00:12:39.518 "uuid": "00000000-0000-0000-0000-000000000002",
00:12:39.518 "is_configured": true,
00:12:39.518 "data_offset": 2048,
00:12:39.518 "data_size": 63488
00:12:39.518 },
00:12:39.518 {
00:12:39.518 "name": "pt3",
00:12:39.518 "uuid": "00000000-0000-0000-0000-000000000003",
00:12:39.518 "is_configured": true,
00:12:39.518 "data_offset": 2048,
00:12:39.518 "data_size": 63488
00:12:39.518 },
00:12:39.518 {
00:12:39.518 "name": "pt4",
00:12:39.518 "uuid": "00000000-0000-0000-0000-000000000004",
00:12:39.518 "is_configured": true,
00:12:39.518 "data_offset": 2048,
00:12:39.518 "data_size": 63488
00:12:39.518 }
00:12:39.518 ]
00:12:39.518 }'
00:12:39.518 19:53:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:39.518 19:53:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:39.776 19:53:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1
00:12:39.776 19:53:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:12:39.776 19:53:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:12:39.776 19:53:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:12:39.776 19:53:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name
00:12:39.776 19:53:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:12:39.776 19:53:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:12:39.776 19:53:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:12:39.776 19:53:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:39.776 19:53:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:39.776 [2024-11-26 19:53:30.527742] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:12:39.776 19:53:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:39.776 19:53:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:12:39.776 "name": "raid_bdev1",
00:12:39.776 "aliases": [
00:12:39.776 "7a88593e-54a2-4844-a2b4-a000862e8bf1"
00:12:39.776 ],
00:12:39.776 "product_name": "Raid Volume",
00:12:39.776 "block_size": 512,
00:12:39.776 "num_blocks": 190464,
00:12:39.776 "uuid": "7a88593e-54a2-4844-a2b4-a000862e8bf1",
00:12:39.776 "assigned_rate_limits": {
00:12:39.776 "rw_ios_per_sec": 0,
00:12:39.776 "rw_mbytes_per_sec": 0,
00:12:39.776 "r_mbytes_per_sec": 0,
00:12:39.776 "w_mbytes_per_sec": 0
00:12:39.776 },
00:12:39.776 "claimed": false,
00:12:39.776 "zoned": false,
00:12:39.776 "supported_io_types": {
00:12:39.776 "read": true,
00:12:39.776 "write": true,
00:12:39.776 "unmap": false,
00:12:39.776 "flush": false,
00:12:39.776 "reset": true,
00:12:39.776 "nvme_admin": false,
00:12:39.776 "nvme_io": false,
00:12:39.776 "nvme_io_md": false,
00:12:39.776 "write_zeroes": true,
00:12:39.776 "zcopy": false,
00:12:39.776 "get_zone_info": false,
00:12:39.776 "zone_management": false,
00:12:39.776 "zone_append": false,
00:12:39.776 "compare": false,
00:12:39.776 "compare_and_write": false,
00:12:39.776 "abort": false,
00:12:39.776 "seek_hole": false,
00:12:39.776 "seek_data": false,
00:12:39.776 "copy": false,
00:12:39.776 "nvme_iov_md": false
00:12:39.776 },
00:12:39.776 "driver_specific": {
00:12:39.776 "raid": {
00:12:39.776 "uuid": "7a88593e-54a2-4844-a2b4-a000862e8bf1",
00:12:39.776 "strip_size_kb": 64,
00:12:39.776 "state": "online",
00:12:39.776 "raid_level": "raid5f",
00:12:39.776 "superblock": true,
00:12:39.776 "num_base_bdevs": 4,
00:12:39.776 "num_base_bdevs_discovered": 4,
00:12:39.776 "num_base_bdevs_operational": 4,
00:12:39.776 "base_bdevs_list": [
00:12:39.776 {
00:12:39.776 "name": "pt1",
00:12:39.776 "uuid": "00000000-0000-0000-0000-000000000001",
00:12:39.776 "is_configured": true,
00:12:39.776 "data_offset": 2048,
00:12:39.776 "data_size": 63488
00:12:39.776 },
00:12:39.776 {
00:12:39.776 "name": "pt2",
00:12:39.776 "uuid": "00000000-0000-0000-0000-000000000002",
00:12:39.776 "is_configured": true,
00:12:39.776 "data_offset": 2048,
00:12:39.776 "data_size": 63488
00:12:39.776 },
00:12:39.776 {
00:12:39.776 "name": "pt3",
00:12:39.776 "uuid": "00000000-0000-0000-0000-000000000003",
00:12:39.776 "is_configured": true,
00:12:39.776 "data_offset": 2048,
00:12:39.776 "data_size": 63488
00:12:39.776 },
00:12:39.776 {
00:12:39.776 "name": "pt4",
00:12:39.776 "uuid": "00000000-0000-0000-0000-000000000004",
00:12:39.776 "is_configured": true,
00:12:39.776 "data_offset": 2048,
00:12:39.776 "data_size": 63488
00:12:39.776 }
00:12:39.776 ]
00:12:39.776 }
00:12:39.776 }
00:12:39.776 }'
00:12:39.776 19:53:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:12:39.776 19:53:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:12:39.776 pt2
00:12:39.776 pt3
00:12:39.776 pt4'
00:12:39.776 19:53:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:12:39.776 19:53:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:12:39.776 19:53:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:12:39.776 19:53:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:12:39.776 19:53:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:12:39.776 19:53:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:39.776 19:53:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:39.776 19:53:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:39.776 19:53:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:12:39.776 19:53:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:12:39.776 19:53:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:12:39.776 19:53:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:12:39.776 19:53:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:12:39.776 19:53:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:39.776 19:53:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:39.776 19:53:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:39.776 19:53:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:12:39.776 19:53:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:12:39.776 19:53:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:12:39.776 19:53:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3
00:12:39.776 19:53:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:39.776 19:53:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:39.776 19:53:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:12:39.776 19:53:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:40.036 19:53:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:12:40.036 19:53:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:12:40.036 19:53:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:12:40.036 19:53:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4
00:12:40.036 19:53:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:40.036 19:53:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:40.036 19:53:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:12:40.036 19:53:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:40.036 19:53:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:12:40.036 19:53:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:12:40.036 19:53:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:12:40.036 19:53:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:40.036 19:53:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:40.036 19:53:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid'
00:12:40.036 [2024-11-26 19:53:30.755743] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:12:40.036 19:53:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:40.036 19:53:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 7a88593e-54a2-4844-a2b4-a000862e8bf1 '!=' 7a88593e-54a2-4844-a2b4-a000862e8bf1 ']'
00:12:40.036 19:53:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f
00:12:40.036 19:53:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:12:40.036 19:53:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0
00:12:40.036 19:53:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1
00:12:40.036 19:53:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:40.036 19:53:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:40.036 [2024-11-26 19:53:30.787635] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1
00:12:40.036 19:53:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:40.036 19:53:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3
00:12:40.036 19:53:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:12:40.036 19:53:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:12:40.036 19:53:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:12:40.036 19:53:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:12:40.036 19:53:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:12:40.036 19:53:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:40.036 19:53:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:40.036 19:53:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:40.036 19:53:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:40.036 19:53:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:40.036 19:53:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:12:40.036 19:53:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:40.036 19:53:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:40.036 19:53:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:40.036 19:53:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:40.036 "name": "raid_bdev1",
00:12:40.036 "uuid": "7a88593e-54a2-4844-a2b4-a000862e8bf1",
00:12:40.036 "strip_size_kb": 64,
00:12:40.036 "state": "online",
00:12:40.036 "raid_level": "raid5f",
00:12:40.036 "superblock": true,
00:12:40.036 "num_base_bdevs": 4,
00:12:40.036 "num_base_bdevs_discovered": 3,
00:12:40.036 "num_base_bdevs_operational": 3,
00:12:40.036 "base_bdevs_list": [
00:12:40.036 {
00:12:40.036 "name": null,
00:12:40.036 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:40.036 "is_configured": false,
00:12:40.036 "data_offset": 0,
00:12:40.036 "data_size": 63488
00:12:40.036 },
00:12:40.036 {
00:12:40.036 "name": "pt2",
00:12:40.036 "uuid": "00000000-0000-0000-0000-000000000002",
00:12:40.036 "is_configured": true,
00:12:40.036 "data_offset": 2048,
00:12:40.036 "data_size": 63488
00:12:40.036 },
00:12:40.036 {
00:12:40.036 "name": "pt3",
00:12:40.036 "uuid": "00000000-0000-0000-0000-000000000003",
00:12:40.036 "is_configured": true,
00:12:40.036 "data_offset": 2048,
00:12:40.036 "data_size": 63488
00:12:40.036 },
00:12:40.036 {
00:12:40.036 "name": "pt4",
00:12:40.036 "uuid": "00000000-0000-0000-0000-000000000004",
00:12:40.036 "is_configured": true,
00:12:40.036 "data_offset": 2048,
00:12:40.036 "data_size": 63488
00:12:40.036 }
00:12:40.036 ]
00:12:40.036 }'
00:12:40.036 19:53:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:40.036 19:53:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:40.295 19:53:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:12:40.295 19:53:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:40.295 19:53:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:40.295 [2024-11-26 19:53:31.079661] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:12:40.295 [2024-11-26 19:53:31.079688] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:12:40.295 [2024-11-26 19:53:31.079760] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:12:40.295 [2024-11-26 19:53:31.079830] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:12:40.295 [2024-11-26 19:53:31.079838] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline
00:12:40.295 19:53:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:40.295 19:53:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]'
00:12:40.295 19:53:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:40.295 19:53:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:40.295 19:53:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:40.295 19:53:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:40.295 19:53:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev=
00:12:40.295 19:53:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']'
00:12:40.295 19:53:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 ))
00:12:40.295 19:53:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs ))
00:12:40.295 19:53:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2
00:12:40.295 19:53:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:40.295 19:53:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:40.295 19:53:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:40.295 19:53:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ ))
00:12:40.295 19:53:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs ))
00:12:40.295 19:53:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3
00:12:40.295 19:53:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:40.295 19:53:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:40.295 19:53:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:40.295 19:53:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ ))
00:12:40.295 19:53:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs ))
00:12:40.295 19:53:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4
00:12:40.295 19:53:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:40.295 19:53:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:40.295 19:53:31 bdev_raid.raid5f_superblock_test --
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.296 19:53:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:12:40.296 19:53:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:12:40.296 19:53:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:12:40.296 19:53:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:12:40.296 19:53:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:40.296 19:53:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.296 19:53:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.296 [2024-11-26 19:53:31.143654] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:40.296 [2024-11-26 19:53:31.143704] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:40.296 [2024-11-26 19:53:31.143721] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:12:40.296 [2024-11-26 19:53:31.143729] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:40.296 [2024-11-26 19:53:31.145738] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:40.296 [2024-11-26 19:53:31.145871] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:40.296 [2024-11-26 19:53:31.145961] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:40.296 [2024-11-26 19:53:31.146005] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:40.296 pt2 00:12:40.296 19:53:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.296 19:53:31 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:12:40.296 19:53:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:40.296 19:53:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:40.296 19:53:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:40.296 19:53:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:40.296 19:53:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:40.296 19:53:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:40.296 19:53:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:40.296 19:53:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:40.296 19:53:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:40.296 19:53:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:40.296 19:53:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:40.296 19:53:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.296 19:53:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.296 19:53:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.296 19:53:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:40.296 "name": "raid_bdev1", 00:12:40.296 "uuid": "7a88593e-54a2-4844-a2b4-a000862e8bf1", 00:12:40.296 "strip_size_kb": 64, 00:12:40.296 "state": "configuring", 00:12:40.296 "raid_level": "raid5f", 00:12:40.296 "superblock": true, 00:12:40.296 
"num_base_bdevs": 4, 00:12:40.296 "num_base_bdevs_discovered": 1, 00:12:40.296 "num_base_bdevs_operational": 3, 00:12:40.296 "base_bdevs_list": [ 00:12:40.296 { 00:12:40.296 "name": null, 00:12:40.296 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:40.296 "is_configured": false, 00:12:40.296 "data_offset": 2048, 00:12:40.296 "data_size": 63488 00:12:40.296 }, 00:12:40.296 { 00:12:40.296 "name": "pt2", 00:12:40.296 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:40.296 "is_configured": true, 00:12:40.296 "data_offset": 2048, 00:12:40.296 "data_size": 63488 00:12:40.296 }, 00:12:40.296 { 00:12:40.296 "name": null, 00:12:40.296 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:40.296 "is_configured": false, 00:12:40.296 "data_offset": 2048, 00:12:40.296 "data_size": 63488 00:12:40.296 }, 00:12:40.296 { 00:12:40.296 "name": null, 00:12:40.296 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:40.296 "is_configured": false, 00:12:40.296 "data_offset": 2048, 00:12:40.296 "data_size": 63488 00:12:40.296 } 00:12:40.296 ] 00:12:40.296 }' 00:12:40.296 19:53:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:40.296 19:53:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.554 19:53:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:12:40.554 19:53:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:12:40.554 19:53:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:40.554 19:53:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.554 19:53:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.554 [2024-11-26 19:53:31.467705] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:40.554 [2024-11-26 
19:53:31.467768] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:40.554 [2024-11-26 19:53:31.467788] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:12:40.554 [2024-11-26 19:53:31.467795] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:40.554 [2024-11-26 19:53:31.468163] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:40.554 [2024-11-26 19:53:31.468173] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:40.554 [2024-11-26 19:53:31.468238] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:12:40.554 [2024-11-26 19:53:31.468255] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:40.554 pt3 00:12:40.555 19:53:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.555 19:53:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:12:40.555 19:53:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:40.555 19:53:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:40.555 19:53:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:40.555 19:53:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:40.555 19:53:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:40.555 19:53:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:40.555 19:53:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:40.555 19:53:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:12:40.555 19:53:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:40.555 19:53:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:40.555 19:53:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:40.555 19:53:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.555 19:53:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.555 19:53:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.813 19:53:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:40.813 "name": "raid_bdev1", 00:12:40.813 "uuid": "7a88593e-54a2-4844-a2b4-a000862e8bf1", 00:12:40.813 "strip_size_kb": 64, 00:12:40.813 "state": "configuring", 00:12:40.813 "raid_level": "raid5f", 00:12:40.813 "superblock": true, 00:12:40.813 "num_base_bdevs": 4, 00:12:40.813 "num_base_bdevs_discovered": 2, 00:12:40.813 "num_base_bdevs_operational": 3, 00:12:40.813 "base_bdevs_list": [ 00:12:40.813 { 00:12:40.813 "name": null, 00:12:40.813 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:40.813 "is_configured": false, 00:12:40.813 "data_offset": 2048, 00:12:40.813 "data_size": 63488 00:12:40.813 }, 00:12:40.813 { 00:12:40.813 "name": "pt2", 00:12:40.813 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:40.813 "is_configured": true, 00:12:40.813 "data_offset": 2048, 00:12:40.813 "data_size": 63488 00:12:40.813 }, 00:12:40.813 { 00:12:40.813 "name": "pt3", 00:12:40.813 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:40.813 "is_configured": true, 00:12:40.813 "data_offset": 2048, 00:12:40.813 "data_size": 63488 00:12:40.813 }, 00:12:40.813 { 00:12:40.813 "name": null, 00:12:40.813 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:40.813 "is_configured": false, 00:12:40.813 "data_offset": 2048, 
00:12:40.813 "data_size": 63488 00:12:40.813 } 00:12:40.813 ] 00:12:40.813 }' 00:12:40.813 19:53:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:40.813 19:53:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.073 19:53:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:12:41.073 19:53:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:12:41.073 19:53:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:12:41.073 19:53:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:12:41.073 19:53:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.073 19:53:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.073 [2024-11-26 19:53:31.779789] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:12:41.073 [2024-11-26 19:53:31.779947] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:41.073 [2024-11-26 19:53:31.779972] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:12:41.073 [2024-11-26 19:53:31.779982] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:41.073 [2024-11-26 19:53:31.780389] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:41.073 [2024-11-26 19:53:31.780402] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:12:41.073 [2024-11-26 19:53:31.780474] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:12:41.073 [2024-11-26 19:53:31.780497] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:12:41.073 [2024-11-26 19:53:31.780607] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:41.073 [2024-11-26 19:53:31.780615] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:12:41.073 [2024-11-26 19:53:31.780823] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:12:41.073 [2024-11-26 19:53:31.784599] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:41.073 [2024-11-26 19:53:31.784618] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:12:41.073 [2024-11-26 19:53:31.784859] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:41.073 pt4 00:12:41.073 19:53:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.073 19:53:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:12:41.073 19:53:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:41.073 19:53:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:41.073 19:53:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:41.073 19:53:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:41.073 19:53:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:41.073 19:53:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:41.073 19:53:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:41.073 19:53:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:41.073 19:53:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:41.073 
19:53:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:41.073 19:53:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:41.073 19:53:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.073 19:53:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.073 19:53:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.073 19:53:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:41.073 "name": "raid_bdev1", 00:12:41.073 "uuid": "7a88593e-54a2-4844-a2b4-a000862e8bf1", 00:12:41.073 "strip_size_kb": 64, 00:12:41.073 "state": "online", 00:12:41.073 "raid_level": "raid5f", 00:12:41.073 "superblock": true, 00:12:41.073 "num_base_bdevs": 4, 00:12:41.073 "num_base_bdevs_discovered": 3, 00:12:41.073 "num_base_bdevs_operational": 3, 00:12:41.073 "base_bdevs_list": [ 00:12:41.073 { 00:12:41.073 "name": null, 00:12:41.073 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:41.073 "is_configured": false, 00:12:41.073 "data_offset": 2048, 00:12:41.073 "data_size": 63488 00:12:41.073 }, 00:12:41.073 { 00:12:41.073 "name": "pt2", 00:12:41.073 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:41.073 "is_configured": true, 00:12:41.073 "data_offset": 2048, 00:12:41.073 "data_size": 63488 00:12:41.073 }, 00:12:41.073 { 00:12:41.073 "name": "pt3", 00:12:41.073 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:41.073 "is_configured": true, 00:12:41.073 "data_offset": 2048, 00:12:41.073 "data_size": 63488 00:12:41.073 }, 00:12:41.073 { 00:12:41.073 "name": "pt4", 00:12:41.073 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:41.073 "is_configured": true, 00:12:41.073 "data_offset": 2048, 00:12:41.073 "data_size": 63488 00:12:41.073 } 00:12:41.073 ] 00:12:41.073 }' 00:12:41.073 19:53:31 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:41.073 19:53:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.332 19:53:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:41.332 19:53:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.332 19:53:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.332 [2024-11-26 19:53:32.093453] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:41.332 [2024-11-26 19:53:32.093477] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:41.332 [2024-11-26 19:53:32.093552] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:41.332 [2024-11-26 19:53:32.093621] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:41.332 [2024-11-26 19:53:32.093632] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:12:41.332 19:53:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.332 19:53:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:12:41.332 19:53:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:41.332 19:53:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.332 19:53:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.332 19:53:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.332 19:53:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:12:41.332 19:53:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n 
'' ']' 00:12:41.332 19:53:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:12:41.332 19:53:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:12:41.333 19:53:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:12:41.333 19:53:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.333 19:53:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.333 19:53:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.333 19:53:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:41.333 19:53:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.333 19:53:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.333 [2024-11-26 19:53:32.145441] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:41.333 [2024-11-26 19:53:32.145491] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:41.333 [2024-11-26 19:53:32.145511] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:12:41.333 [2024-11-26 19:53:32.145523] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:41.333 [2024-11-26 19:53:32.147462] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:41.333 [2024-11-26 19:53:32.147491] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:41.333 [2024-11-26 19:53:32.147557] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:12:41.333 [2024-11-26 19:53:32.147596] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:41.333 
[2024-11-26 19:53:32.147691] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:12:41.333 [2024-11-26 19:53:32.147701] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:41.333 [2024-11-26 19:53:32.147713] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:12:41.333 [2024-11-26 19:53:32.147756] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:41.333 [2024-11-26 19:53:32.147840] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:41.333 pt1 00:12:41.333 19:53:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.333 19:53:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:12:41.333 19:53:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:12:41.333 19:53:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:41.333 19:53:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:41.333 19:53:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:41.333 19:53:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:41.333 19:53:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:41.333 19:53:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:41.333 19:53:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:41.333 19:53:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:41.333 19:53:32 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:12:41.333 19:53:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:41.333 19:53:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:41.333 19:53:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.333 19:53:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.333 19:53:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.333 19:53:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:41.333 "name": "raid_bdev1", 00:12:41.333 "uuid": "7a88593e-54a2-4844-a2b4-a000862e8bf1", 00:12:41.333 "strip_size_kb": 64, 00:12:41.333 "state": "configuring", 00:12:41.333 "raid_level": "raid5f", 00:12:41.333 "superblock": true, 00:12:41.333 "num_base_bdevs": 4, 00:12:41.333 "num_base_bdevs_discovered": 2, 00:12:41.333 "num_base_bdevs_operational": 3, 00:12:41.333 "base_bdevs_list": [ 00:12:41.333 { 00:12:41.333 "name": null, 00:12:41.333 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:41.333 "is_configured": false, 00:12:41.333 "data_offset": 2048, 00:12:41.333 "data_size": 63488 00:12:41.333 }, 00:12:41.333 { 00:12:41.333 "name": "pt2", 00:12:41.333 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:41.333 "is_configured": true, 00:12:41.333 "data_offset": 2048, 00:12:41.333 "data_size": 63488 00:12:41.333 }, 00:12:41.333 { 00:12:41.333 "name": "pt3", 00:12:41.333 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:41.333 "is_configured": true, 00:12:41.333 "data_offset": 2048, 00:12:41.333 "data_size": 63488 00:12:41.333 }, 00:12:41.333 { 00:12:41.333 "name": null, 00:12:41.333 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:41.333 "is_configured": false, 00:12:41.333 "data_offset": 2048, 00:12:41.333 "data_size": 63488 00:12:41.333 } 00:12:41.333 ] 
00:12:41.333 }' 00:12:41.333 19:53:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:41.333 19:53:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.592 19:53:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:12:41.592 19:53:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:12:41.592 19:53:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.592 19:53:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.592 19:53:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.592 19:53:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:12:41.592 19:53:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:12:41.592 19:53:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.592 19:53:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.592 [2024-11-26 19:53:32.513549] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:12:41.592 [2024-11-26 19:53:32.513706] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:41.592 [2024-11-26 19:53:32.513732] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:12:41.592 [2024-11-26 19:53:32.513740] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:41.592 [2024-11-26 19:53:32.514139] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:41.592 [2024-11-26 19:53:32.514157] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 
00:12:41.592 [2024-11-26 19:53:32.514230] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:12:41.592 [2024-11-26 19:53:32.514250] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:12:41.592 [2024-11-26 19:53:32.514374] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:12:41.592 [2024-11-26 19:53:32.514381] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:12:41.592 [2024-11-26 19:53:32.514585] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:12:41.592 [2024-11-26 19:53:32.518403] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:12:41.592 [2024-11-26 19:53:32.518421] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:12:41.592 [2024-11-26 19:53:32.518642] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:41.592 pt4 00:12:41.592 19:53:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.592 19:53:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:12:41.592 19:53:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:41.592 19:53:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:41.592 19:53:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:41.593 19:53:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:41.593 19:53:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:41.593 19:53:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:41.593 19:53:32 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:41.593 19:53:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:41.593 19:53:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:41.593 19:53:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:41.593 19:53:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.593 19:53:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.593 19:53:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:41.851 19:53:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.851 19:53:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:41.851 "name": "raid_bdev1", 00:12:41.851 "uuid": "7a88593e-54a2-4844-a2b4-a000862e8bf1", 00:12:41.851 "strip_size_kb": 64, 00:12:41.851 "state": "online", 00:12:41.851 "raid_level": "raid5f", 00:12:41.851 "superblock": true, 00:12:41.851 "num_base_bdevs": 4, 00:12:41.851 "num_base_bdevs_discovered": 3, 00:12:41.851 "num_base_bdevs_operational": 3, 00:12:41.851 "base_bdevs_list": [ 00:12:41.851 { 00:12:41.851 "name": null, 00:12:41.851 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:41.851 "is_configured": false, 00:12:41.852 "data_offset": 2048, 00:12:41.852 "data_size": 63488 00:12:41.852 }, 00:12:41.852 { 00:12:41.852 "name": "pt2", 00:12:41.852 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:41.852 "is_configured": true, 00:12:41.852 "data_offset": 2048, 00:12:41.852 "data_size": 63488 00:12:41.852 }, 00:12:41.852 { 00:12:41.852 "name": "pt3", 00:12:41.852 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:41.852 "is_configured": true, 00:12:41.852 "data_offset": 2048, 00:12:41.852 "data_size": 63488 
00:12:41.852 }, 00:12:41.852 { 00:12:41.852 "name": "pt4", 00:12:41.852 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:41.852 "is_configured": true, 00:12:41.852 "data_offset": 2048, 00:12:41.852 "data_size": 63488 00:12:41.852 } 00:12:41.852 ] 00:12:41.852 }' 00:12:41.852 19:53:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:41.852 19:53:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.111 19:53:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:12:42.111 19:53:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:12:42.111 19:53:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.111 19:53:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.111 19:53:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.111 19:53:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:12:42.111 19:53:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:42.111 19:53:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:12:42.111 19:53:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.111 19:53:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.111 [2024-11-26 19:53:32.847282] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:42.111 19:53:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.111 19:53:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 7a88593e-54a2-4844-a2b4-a000862e8bf1 '!=' 7a88593e-54a2-4844-a2b4-a000862e8bf1 ']' 00:12:42.111 19:53:32 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 81668 00:12:42.111 19:53:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 81668 ']' 00:12:42.111 19:53:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # kill -0 81668 00:12:42.111 19:53:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # uname 00:12:42.111 19:53:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:42.111 19:53:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81668 00:12:42.111 19:53:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:42.111 19:53:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:42.111 19:53:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81668' 00:12:42.111 killing process with pid 81668 00:12:42.111 19:53:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@973 -- # kill 81668 00:12:42.111 [2024-11-26 19:53:32.897474] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:42.111 19:53:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@978 -- # wait 81668 00:12:42.111 [2024-11-26 19:53:32.897634] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:42.111 [2024-11-26 19:53:32.897843] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:42.111 [2024-11-26 19:53:32.897952] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:12:42.370 [2024-11-26 19:53:33.099188] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:42.935 19:53:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:12:42.936 
************************************ 00:12:42.936 END TEST raid5f_superblock_test 00:12:42.936 ************************************ 00:12:42.936 00:12:42.936 real 0m5.957s 00:12:42.936 user 0m9.427s 00:12:42.936 sys 0m1.053s 00:12:42.936 19:53:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:42.936 19:53:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.936 19:53:33 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:12:42.936 19:53:33 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 4 false false true 00:12:42.936 19:53:33 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:12:42.936 19:53:33 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:42.936 19:53:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:42.936 ************************************ 00:12:42.936 START TEST raid5f_rebuild_test 00:12:42.936 ************************************ 00:12:42.936 19:53:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 4 false false true 00:12:42.936 19:53:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:12:42.936 19:53:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:12:42.936 19:53:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:12:42.936 19:53:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:12:42.936 19:53:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:42.936 19:53:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:42.936 19:53:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:42.936 19:53:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev1 00:12:42.936 19:53:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:42.936 19:53:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:42.936 19:53:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:42.936 19:53:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:42.936 19:53:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:42.936 19:53:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:12:42.936 19:53:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:42.936 19:53:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:42.936 19:53:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:12:42.936 19:53:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:42.936 19:53:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:42.936 19:53:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:42.936 19:53:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:42.936 19:53:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:42.936 19:53:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:42.936 19:53:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:42.936 19:53:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:42.936 19:53:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:42.936 19:53:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:12:42.936 19:53:33 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:12:42.936 19:53:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:12:42.936 19:53:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:12:42.936 19:53:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:12:42.936 19:53:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=82126 00:12:42.936 19:53:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 82126 00:12:42.936 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:42.936 19:53:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 82126 ']' 00:12:42.936 19:53:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:42.936 19:53:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:42.936 19:53:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:42.936 19:53:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:42.936 19:53:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.936 19:53:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:42.936 [2024-11-26 19:53:33.820547] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 00:12:42.936 [2024-11-26 19:53:33.820856] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82126 ]
00:12:42.936 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:42.936 Zero copy mechanism will not be used. 00:12:43.194 [2024-11-26 19:53:33.975730] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:43.194 [2024-11-26 19:53:34.073990] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:43.452 [2024-11-26 19:53:34.191888] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:43.452 [2024-11-26 19:53:34.192088] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:44.020 19:53:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:44.020 19:53:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:12:44.020 19:53:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:44.020 19:53:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:44.020 19:53:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.020 19:53:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.020 BaseBdev1_malloc 00:12:44.020 19:53:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.020 19:53:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:44.020 19:53:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.020 19:53:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.020 [2024-11-26 19:53:34.700635] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:44.020 [2024-11-26 19:53:34.700798] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:12:44.020 [2024-11-26 19:53:34.700823] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:44.020 [2024-11-26 19:53:34.700832] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:44.020 [2024-11-26 19:53:34.702641] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:44.020 [2024-11-26 19:53:34.702670] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:44.020 BaseBdev1 00:12:44.021 19:53:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.021 19:53:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:44.021 19:53:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:44.021 19:53:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.021 19:53:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.021 BaseBdev2_malloc 00:12:44.021 19:53:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.021 19:53:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:12:44.021 19:53:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.021 19:53:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.021 [2024-11-26 19:53:34.733517] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:12:44.021 [2024-11-26 19:53:34.733564] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:44.021 [2024-11-26 19:53:34.733583] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:44.021 [2024-11-26 19:53:34.733592] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:44.021 [2024-11-26 19:53:34.735432] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:44.021 [2024-11-26 19:53:34.735558] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:44.021 BaseBdev2 00:12:44.021 19:53:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.021 19:53:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:44.021 19:53:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:44.021 19:53:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.021 19:53:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.021 BaseBdev3_malloc 00:12:44.021 19:53:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.021 19:53:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:12:44.021 19:53:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.021 19:53:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.021 [2024-11-26 19:53:34.780681] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:12:44.021 [2024-11-26 19:53:34.780727] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:44.021 [2024-11-26 19:53:34.780745] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:12:44.021 [2024-11-26 19:53:34.780755] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:44.021 [2024-11-26 19:53:34.782590] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:44.021 [2024-11-26 
19:53:34.782619] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:44.021 BaseBdev3 00:12:44.021 19:53:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.021 19:53:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:44.021 19:53:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:12:44.021 19:53:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.021 19:53:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.021 BaseBdev4_malloc 00:12:44.021 19:53:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.021 19:53:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:12:44.021 19:53:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.021 19:53:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.021 [2024-11-26 19:53:34.813694] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:12:44.021 [2024-11-26 19:53:34.813828] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:44.021 [2024-11-26 19:53:34.813847] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:12:44.021 [2024-11-26 19:53:34.813855] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:44.021 [2024-11-26 19:53:34.815608] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:44.021 [2024-11-26 19:53:34.815635] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:12:44.021 BaseBdev4 00:12:44.021 19:53:34 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.021 19:53:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:12:44.021 19:53:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.021 19:53:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.021 spare_malloc 00:12:44.021 19:53:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.021 19:53:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:12:44.021 19:53:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.021 19:53:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.021 spare_delay 00:12:44.021 19:53:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.021 19:53:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:44.021 19:53:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.021 19:53:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.021 [2024-11-26 19:53:34.854597] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:44.021 [2024-11-26 19:53:34.854638] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:44.021 [2024-11-26 19:53:34.854652] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:12:44.021 [2024-11-26 19:53:34.854662] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:44.021 [2024-11-26 19:53:34.856468] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:44.021 [2024-11-26 19:53:34.856583] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:44.021 spare 00:12:44.021 19:53:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.021 19:53:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:12:44.021 19:53:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.021 19:53:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.021 [2024-11-26 19:53:34.862640] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:44.021 [2024-11-26 19:53:34.864207] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:44.021 [2024-11-26 19:53:34.864334] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:44.021 [2024-11-26 19:53:34.864397] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:44.021 [2024-11-26 19:53:34.864467] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:44.021 [2024-11-26 19:53:34.864478] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:12:44.021 [2024-11-26 19:53:34.864690] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:12:44.021 [2024-11-26 19:53:34.868562] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:44.021 [2024-11-26 19:53:34.868577] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:44.021 [2024-11-26 19:53:34.868721] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:44.021 19:53:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.021 19:53:34 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:12:44.021 19:53:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:44.021 19:53:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:44.021 19:53:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:44.021 19:53:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:44.021 19:53:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:44.021 19:53:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:44.021 19:53:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:44.021 19:53:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:44.021 19:53:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:44.021 19:53:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:44.021 19:53:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:44.021 19:53:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.021 19:53:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.021 19:53:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.021 19:53:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:44.021 "name": "raid_bdev1", 00:12:44.021 "uuid": "663c7b8c-d1c5-4115-b32d-e2ea7ab35bfd", 00:12:44.021 "strip_size_kb": 64, 00:12:44.021 "state": "online", 00:12:44.021 "raid_level": "raid5f", 00:12:44.021 "superblock": false, 00:12:44.021 "num_base_bdevs": 4, 00:12:44.021 
"num_base_bdevs_discovered": 4, 00:12:44.022 "num_base_bdevs_operational": 4, 00:12:44.022 "base_bdevs_list": [ 00:12:44.022 { 00:12:44.022 "name": "BaseBdev1", 00:12:44.022 "uuid": "1a60a0f1-6f18-58f1-9f46-8a10ef1ea3fd", 00:12:44.022 "is_configured": true, 00:12:44.022 "data_offset": 0, 00:12:44.022 "data_size": 65536 00:12:44.022 }, 00:12:44.022 { 00:12:44.022 "name": "BaseBdev2", 00:12:44.022 "uuid": "0fc9ba9a-c363-5dd9-87d6-f5b692f8bb7c", 00:12:44.022 "is_configured": true, 00:12:44.022 "data_offset": 0, 00:12:44.022 "data_size": 65536 00:12:44.022 }, 00:12:44.022 { 00:12:44.022 "name": "BaseBdev3", 00:12:44.022 "uuid": "ea908879-5095-59a8-8466-cce63384c92d", 00:12:44.022 "is_configured": true, 00:12:44.022 "data_offset": 0, 00:12:44.022 "data_size": 65536 00:12:44.022 }, 00:12:44.022 { 00:12:44.022 "name": "BaseBdev4", 00:12:44.022 "uuid": "57359082-ac15-56e1-8a80-c7c9d6649ede", 00:12:44.022 "is_configured": true, 00:12:44.022 "data_offset": 0, 00:12:44.022 "data_size": 65536 00:12:44.022 } 00:12:44.022 ] 00:12:44.022 }' 00:12:44.022 19:53:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:44.022 19:53:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.279 19:53:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:12:44.279 19:53:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:44.279 19:53:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.279 19:53:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.279 [2024-11-26 19:53:35.169420] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:44.279 19:53:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.279 19:53:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=196608 
00:12:44.279 19:53:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:12:44.279 19:53:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:44.279 19:53:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.279 19:53:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.538 19:53:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.538 19:53:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:12:44.538 19:53:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:12:44.538 19:53:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:12:44.538 19:53:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:12:44.538 19:53:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:12:44.538 19:53:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:44.538 19:53:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:12:44.538 19:53:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:44.538 19:53:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:44.538 19:53:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:44.538 19:53:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:12:44.538 19:53:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:44.538 19:53:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:44.538 19:53:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:12:44.538 [2024-11-26 19:53:35.377280] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:12:44.538 /dev/nbd0 00:12:44.538 19:53:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:44.538 19:53:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:44.538 19:53:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:44.538 19:53:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:12:44.538 19:53:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:44.538 19:53:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:44.538 19:53:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:44.538 19:53:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:12:44.538 19:53:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:44.538 19:53:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:44.538 19:53:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:44.538 1+0 records in 00:12:44.538 1+0 records out 00:12:44.538 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000317257 s, 12.9 MB/s 00:12:44.538 19:53:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:44.538 19:53:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:12:44.538 19:53:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
00:12:44.539 19:53:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:44.539 19:53:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:12:44.539 19:53:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:44.539 19:53:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:44.539 19:53:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:12:44.539 19:53:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:12:44.539 19:53:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 192 00:12:44.539 19:53:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=512 oflag=direct 00:12:45.105 512+0 records in 00:12:45.105 512+0 records out 00:12:45.105 100663296 bytes (101 MB, 96 MiB) copied, 0.474697 s, 212 MB/s 00:12:45.105 19:53:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:45.105 19:53:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:45.105 19:53:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:45.105 19:53:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:45.105 19:53:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:12:45.105 19:53:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:45.105 19:53:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:45.363 19:53:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:45.363 19:53:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 
00:12:45.363 19:53:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:45.363 19:53:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:45.363 19:53:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:45.363 19:53:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:45.363 [2024-11-26 19:53:36.122820] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:45.363 19:53:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:12:45.363 19:53:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:12:45.363 19:53:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:12:45.363 19:53:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.363 19:53:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.363 [2024-11-26 19:53:36.131588] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:45.363 19:53:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.363 19:53:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:12:45.363 19:53:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:45.363 19:53:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:45.364 19:53:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:45.364 19:53:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:45.364 19:53:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:45.364 19:53:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- 
# local raid_bdev_info 00:12:45.364 19:53:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:45.364 19:53:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:45.364 19:53:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:45.364 19:53:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:45.364 19:53:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:45.364 19:53:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.364 19:53:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.364 19:53:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.364 19:53:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:45.364 "name": "raid_bdev1", 00:12:45.364 "uuid": "663c7b8c-d1c5-4115-b32d-e2ea7ab35bfd", 00:12:45.364 "strip_size_kb": 64, 00:12:45.364 "state": "online", 00:12:45.364 "raid_level": "raid5f", 00:12:45.364 "superblock": false, 00:12:45.364 "num_base_bdevs": 4, 00:12:45.364 "num_base_bdevs_discovered": 3, 00:12:45.364 "num_base_bdevs_operational": 3, 00:12:45.364 "base_bdevs_list": [ 00:12:45.364 { 00:12:45.364 "name": null, 00:12:45.364 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:45.364 "is_configured": false, 00:12:45.364 "data_offset": 0, 00:12:45.364 "data_size": 65536 00:12:45.364 }, 00:12:45.364 { 00:12:45.364 "name": "BaseBdev2", 00:12:45.364 "uuid": "0fc9ba9a-c363-5dd9-87d6-f5b692f8bb7c", 00:12:45.364 "is_configured": true, 00:12:45.364 "data_offset": 0, 00:12:45.364 "data_size": 65536 00:12:45.364 }, 00:12:45.364 { 00:12:45.364 "name": "BaseBdev3", 00:12:45.364 "uuid": "ea908879-5095-59a8-8466-cce63384c92d", 00:12:45.364 "is_configured": true, 00:12:45.364 "data_offset": 0, 
00:12:45.364 "data_size": 65536 00:12:45.364 }, 00:12:45.364 { 00:12:45.364 "name": "BaseBdev4", 00:12:45.364 "uuid": "57359082-ac15-56e1-8a80-c7c9d6649ede", 00:12:45.364 "is_configured": true, 00:12:45.364 "data_offset": 0, 00:12:45.364 "data_size": 65536 00:12:45.364 } 00:12:45.364 ] 00:12:45.364 }' 00:12:45.364 19:53:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:45.364 19:53:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.622 19:53:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:45.622 19:53:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.622 19:53:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.622 [2024-11-26 19:53:36.431648] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:45.622 [2024-11-26 19:53:36.440105] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:12:45.622 19:53:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.622 19:53:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:12:45.622 [2024-11-26 19:53:36.445547] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:46.558 19:53:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:46.558 19:53:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:46.558 19:53:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:46.558 19:53:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:46.558 19:53:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:46.558 19:53:37 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:46.558 19:53:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.558 19:53:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.558 19:53:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:46.558 19:53:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.558 19:53:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:46.558 "name": "raid_bdev1", 00:12:46.558 "uuid": "663c7b8c-d1c5-4115-b32d-e2ea7ab35bfd", 00:12:46.558 "strip_size_kb": 64, 00:12:46.558 "state": "online", 00:12:46.558 "raid_level": "raid5f", 00:12:46.558 "superblock": false, 00:12:46.558 "num_base_bdevs": 4, 00:12:46.558 "num_base_bdevs_discovered": 4, 00:12:46.558 "num_base_bdevs_operational": 4, 00:12:46.558 "process": { 00:12:46.558 "type": "rebuild", 00:12:46.558 "target": "spare", 00:12:46.558 "progress": { 00:12:46.558 "blocks": 17280, 00:12:46.558 "percent": 8 00:12:46.558 } 00:12:46.558 }, 00:12:46.558 "base_bdevs_list": [ 00:12:46.558 { 00:12:46.558 "name": "spare", 00:12:46.558 "uuid": "37cbb244-18e5-5d7f-b475-46f711e1d281", 00:12:46.558 "is_configured": true, 00:12:46.558 "data_offset": 0, 00:12:46.558 "data_size": 65536 00:12:46.558 }, 00:12:46.558 { 00:12:46.558 "name": "BaseBdev2", 00:12:46.558 "uuid": "0fc9ba9a-c363-5dd9-87d6-f5b692f8bb7c", 00:12:46.558 "is_configured": true, 00:12:46.558 "data_offset": 0, 00:12:46.558 "data_size": 65536 00:12:46.558 }, 00:12:46.558 { 00:12:46.558 "name": "BaseBdev3", 00:12:46.558 "uuid": "ea908879-5095-59a8-8466-cce63384c92d", 00:12:46.558 "is_configured": true, 00:12:46.558 "data_offset": 0, 00:12:46.558 "data_size": 65536 00:12:46.558 }, 00:12:46.558 { 00:12:46.558 "name": "BaseBdev4", 00:12:46.558 "uuid": "57359082-ac15-56e1-8a80-c7c9d6649ede", 
00:12:46.558 "is_configured": true, 00:12:46.558 "data_offset": 0, 00:12:46.558 "data_size": 65536 00:12:46.558 } 00:12:46.558 ] 00:12:46.558 }' 00:12:46.558 19:53:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:46.817 19:53:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:46.817 19:53:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:46.817 19:53:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:46.817 19:53:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:46.817 19:53:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.817 19:53:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.817 [2024-11-26 19:53:37.542236] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:46.817 [2024-11-26 19:53:37.553870] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:46.817 [2024-11-26 19:53:37.553933] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:46.817 [2024-11-26 19:53:37.553950] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:46.817 [2024-11-26 19:53:37.553962] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:46.817 19:53:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.817 19:53:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:12:46.817 19:53:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:46.817 19:53:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:12:46.817 19:53:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:46.817 19:53:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:46.817 19:53:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:46.817 19:53:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:46.817 19:53:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:46.817 19:53:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:46.817 19:53:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:46.817 19:53:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:46.817 19:53:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:46.817 19:53:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.817 19:53:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.817 19:53:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.817 19:53:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:46.817 "name": "raid_bdev1", 00:12:46.817 "uuid": "663c7b8c-d1c5-4115-b32d-e2ea7ab35bfd", 00:12:46.817 "strip_size_kb": 64, 00:12:46.817 "state": "online", 00:12:46.817 "raid_level": "raid5f", 00:12:46.817 "superblock": false, 00:12:46.817 "num_base_bdevs": 4, 00:12:46.817 "num_base_bdevs_discovered": 3, 00:12:46.817 "num_base_bdevs_operational": 3, 00:12:46.817 "base_bdevs_list": [ 00:12:46.817 { 00:12:46.817 "name": null, 00:12:46.817 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:46.817 "is_configured": false, 00:12:46.817 "data_offset": 0, 00:12:46.817 "data_size": 65536 
00:12:46.817 }, 00:12:46.817 { 00:12:46.817 "name": "BaseBdev2", 00:12:46.817 "uuid": "0fc9ba9a-c363-5dd9-87d6-f5b692f8bb7c", 00:12:46.817 "is_configured": true, 00:12:46.817 "data_offset": 0, 00:12:46.817 "data_size": 65536 00:12:46.817 }, 00:12:46.817 { 00:12:46.817 "name": "BaseBdev3", 00:12:46.817 "uuid": "ea908879-5095-59a8-8466-cce63384c92d", 00:12:46.817 "is_configured": true, 00:12:46.817 "data_offset": 0, 00:12:46.817 "data_size": 65536 00:12:46.817 }, 00:12:46.817 { 00:12:46.817 "name": "BaseBdev4", 00:12:46.817 "uuid": "57359082-ac15-56e1-8a80-c7c9d6649ede", 00:12:46.817 "is_configured": true, 00:12:46.817 "data_offset": 0, 00:12:46.817 "data_size": 65536 00:12:46.817 } 00:12:46.817 ] 00:12:46.817 }' 00:12:46.817 19:53:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:46.817 19:53:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.075 19:53:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:47.075 19:53:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:47.075 19:53:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:47.075 19:53:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:47.075 19:53:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:47.075 19:53:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:47.075 19:53:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.075 19:53:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.075 19:53:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:47.075 19:53:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:12:47.075 19:53:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:47.075 "name": "raid_bdev1", 00:12:47.075 "uuid": "663c7b8c-d1c5-4115-b32d-e2ea7ab35bfd", 00:12:47.075 "strip_size_kb": 64, 00:12:47.075 "state": "online", 00:12:47.075 "raid_level": "raid5f", 00:12:47.075 "superblock": false, 00:12:47.075 "num_base_bdevs": 4, 00:12:47.075 "num_base_bdevs_discovered": 3, 00:12:47.075 "num_base_bdevs_operational": 3, 00:12:47.075 "base_bdevs_list": [ 00:12:47.075 { 00:12:47.075 "name": null, 00:12:47.075 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:47.075 "is_configured": false, 00:12:47.075 "data_offset": 0, 00:12:47.075 "data_size": 65536 00:12:47.075 }, 00:12:47.075 { 00:12:47.075 "name": "BaseBdev2", 00:12:47.075 "uuid": "0fc9ba9a-c363-5dd9-87d6-f5b692f8bb7c", 00:12:47.075 "is_configured": true, 00:12:47.075 "data_offset": 0, 00:12:47.075 "data_size": 65536 00:12:47.075 }, 00:12:47.075 { 00:12:47.075 "name": "BaseBdev3", 00:12:47.075 "uuid": "ea908879-5095-59a8-8466-cce63384c92d", 00:12:47.075 "is_configured": true, 00:12:47.075 "data_offset": 0, 00:12:47.075 "data_size": 65536 00:12:47.075 }, 00:12:47.075 { 00:12:47.075 "name": "BaseBdev4", 00:12:47.075 "uuid": "57359082-ac15-56e1-8a80-c7c9d6649ede", 00:12:47.075 "is_configured": true, 00:12:47.075 "data_offset": 0, 00:12:47.075 "data_size": 65536 00:12:47.075 } 00:12:47.076 ] 00:12:47.076 }' 00:12:47.076 19:53:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:47.076 19:53:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:47.076 19:53:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:47.076 19:53:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:47.076 19:53:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 
00:12:47.076 19:53:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.076 19:53:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.076 [2024-11-26 19:53:37.975094] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:47.076 [2024-11-26 19:53:37.982959] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b820 00:12:47.076 19:53:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.076 19:53:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:12:47.076 [2024-11-26 19:53:37.988322] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:48.447 19:53:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:48.447 19:53:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:48.447 19:53:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:48.447 19:53:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:48.447 19:53:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:48.447 19:53:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:48.447 19:53:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:48.447 19:53:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.447 19:53:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.447 19:53:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.447 19:53:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:48.447 
"name": "raid_bdev1", 00:12:48.447 "uuid": "663c7b8c-d1c5-4115-b32d-e2ea7ab35bfd", 00:12:48.447 "strip_size_kb": 64, 00:12:48.447 "state": "online", 00:12:48.447 "raid_level": "raid5f", 00:12:48.447 "superblock": false, 00:12:48.447 "num_base_bdevs": 4, 00:12:48.447 "num_base_bdevs_discovered": 4, 00:12:48.447 "num_base_bdevs_operational": 4, 00:12:48.447 "process": { 00:12:48.447 "type": "rebuild", 00:12:48.447 "target": "spare", 00:12:48.447 "progress": { 00:12:48.447 "blocks": 19200, 00:12:48.447 "percent": 9 00:12:48.447 } 00:12:48.447 }, 00:12:48.447 "base_bdevs_list": [ 00:12:48.447 { 00:12:48.447 "name": "spare", 00:12:48.447 "uuid": "37cbb244-18e5-5d7f-b475-46f711e1d281", 00:12:48.447 "is_configured": true, 00:12:48.447 "data_offset": 0, 00:12:48.447 "data_size": 65536 00:12:48.447 }, 00:12:48.447 { 00:12:48.447 "name": "BaseBdev2", 00:12:48.447 "uuid": "0fc9ba9a-c363-5dd9-87d6-f5b692f8bb7c", 00:12:48.447 "is_configured": true, 00:12:48.447 "data_offset": 0, 00:12:48.447 "data_size": 65536 00:12:48.447 }, 00:12:48.447 { 00:12:48.447 "name": "BaseBdev3", 00:12:48.447 "uuid": "ea908879-5095-59a8-8466-cce63384c92d", 00:12:48.447 "is_configured": true, 00:12:48.447 "data_offset": 0, 00:12:48.447 "data_size": 65536 00:12:48.447 }, 00:12:48.447 { 00:12:48.447 "name": "BaseBdev4", 00:12:48.447 "uuid": "57359082-ac15-56e1-8a80-c7c9d6649ede", 00:12:48.447 "is_configured": true, 00:12:48.447 "data_offset": 0, 00:12:48.447 "data_size": 65536 00:12:48.447 } 00:12:48.447 ] 00:12:48.447 }' 00:12:48.447 19:53:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:48.447 19:53:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:48.447 19:53:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:48.447 19:53:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:48.447 19:53:39 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:12:48.447 19:53:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:12:48.447 19:53:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:12:48.447 19:53:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=489 00:12:48.447 19:53:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:48.448 19:53:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:48.448 19:53:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:48.448 19:53:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:48.448 19:53:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:48.448 19:53:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:48.448 19:53:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:48.448 19:53:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.448 19:53:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.448 19:53:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:48.448 19:53:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.448 19:53:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:48.448 "name": "raid_bdev1", 00:12:48.448 "uuid": "663c7b8c-d1c5-4115-b32d-e2ea7ab35bfd", 00:12:48.448 "strip_size_kb": 64, 00:12:48.448 "state": "online", 00:12:48.448 "raid_level": "raid5f", 00:12:48.448 "superblock": false, 00:12:48.448 "num_base_bdevs": 4, 00:12:48.448 
"num_base_bdevs_discovered": 4, 00:12:48.448 "num_base_bdevs_operational": 4, 00:12:48.448 "process": { 00:12:48.448 "type": "rebuild", 00:12:48.448 "target": "spare", 00:12:48.448 "progress": { 00:12:48.448 "blocks": 21120, 00:12:48.448 "percent": 10 00:12:48.448 } 00:12:48.448 }, 00:12:48.448 "base_bdevs_list": [ 00:12:48.448 { 00:12:48.448 "name": "spare", 00:12:48.448 "uuid": "37cbb244-18e5-5d7f-b475-46f711e1d281", 00:12:48.448 "is_configured": true, 00:12:48.448 "data_offset": 0, 00:12:48.448 "data_size": 65536 00:12:48.448 }, 00:12:48.448 { 00:12:48.448 "name": "BaseBdev2", 00:12:48.448 "uuid": "0fc9ba9a-c363-5dd9-87d6-f5b692f8bb7c", 00:12:48.448 "is_configured": true, 00:12:48.448 "data_offset": 0, 00:12:48.448 "data_size": 65536 00:12:48.448 }, 00:12:48.448 { 00:12:48.448 "name": "BaseBdev3", 00:12:48.448 "uuid": "ea908879-5095-59a8-8466-cce63384c92d", 00:12:48.448 "is_configured": true, 00:12:48.448 "data_offset": 0, 00:12:48.448 "data_size": 65536 00:12:48.448 }, 00:12:48.448 { 00:12:48.448 "name": "BaseBdev4", 00:12:48.448 "uuid": "57359082-ac15-56e1-8a80-c7c9d6649ede", 00:12:48.448 "is_configured": true, 00:12:48.448 "data_offset": 0, 00:12:48.448 "data_size": 65536 00:12:48.448 } 00:12:48.448 ] 00:12:48.448 }' 00:12:48.448 19:53:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:48.448 19:53:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:48.448 19:53:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:48.448 19:53:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:48.448 19:53:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:49.381 19:53:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:49.381 19:53:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process 
raid_bdev1 rebuild spare 00:12:49.381 19:53:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:49.381 19:53:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:49.381 19:53:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:49.381 19:53:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:49.381 19:53:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:49.381 19:53:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:49.381 19:53:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.381 19:53:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.381 19:53:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.381 19:53:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:49.381 "name": "raid_bdev1", 00:12:49.381 "uuid": "663c7b8c-d1c5-4115-b32d-e2ea7ab35bfd", 00:12:49.381 "strip_size_kb": 64, 00:12:49.381 "state": "online", 00:12:49.381 "raid_level": "raid5f", 00:12:49.381 "superblock": false, 00:12:49.381 "num_base_bdevs": 4, 00:12:49.381 "num_base_bdevs_discovered": 4, 00:12:49.381 "num_base_bdevs_operational": 4, 00:12:49.381 "process": { 00:12:49.381 "type": "rebuild", 00:12:49.381 "target": "spare", 00:12:49.381 "progress": { 00:12:49.382 "blocks": 42240, 00:12:49.382 "percent": 21 00:12:49.382 } 00:12:49.382 }, 00:12:49.382 "base_bdevs_list": [ 00:12:49.382 { 00:12:49.382 "name": "spare", 00:12:49.382 "uuid": "37cbb244-18e5-5d7f-b475-46f711e1d281", 00:12:49.382 "is_configured": true, 00:12:49.382 "data_offset": 0, 00:12:49.382 "data_size": 65536 00:12:49.382 }, 00:12:49.382 { 00:12:49.382 "name": "BaseBdev2", 00:12:49.382 "uuid": 
"0fc9ba9a-c363-5dd9-87d6-f5b692f8bb7c", 00:12:49.382 "is_configured": true, 00:12:49.382 "data_offset": 0, 00:12:49.382 "data_size": 65536 00:12:49.382 }, 00:12:49.382 { 00:12:49.382 "name": "BaseBdev3", 00:12:49.382 "uuid": "ea908879-5095-59a8-8466-cce63384c92d", 00:12:49.382 "is_configured": true, 00:12:49.382 "data_offset": 0, 00:12:49.382 "data_size": 65536 00:12:49.382 }, 00:12:49.382 { 00:12:49.382 "name": "BaseBdev4", 00:12:49.382 "uuid": "57359082-ac15-56e1-8a80-c7c9d6649ede", 00:12:49.382 "is_configured": true, 00:12:49.382 "data_offset": 0, 00:12:49.382 "data_size": 65536 00:12:49.382 } 00:12:49.382 ] 00:12:49.382 }' 00:12:49.382 19:53:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:49.382 19:53:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:49.382 19:53:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:49.382 19:53:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:49.382 19:53:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:50.781 19:53:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:50.781 19:53:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:50.781 19:53:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:50.781 19:53:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:50.781 19:53:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:50.781 19:53:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:50.781 19:53:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:50.781 19:53:41 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.781 19:53:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.781 19:53:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:50.781 19:53:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.781 19:53:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:50.781 "name": "raid_bdev1", 00:12:50.781 "uuid": "663c7b8c-d1c5-4115-b32d-e2ea7ab35bfd", 00:12:50.781 "strip_size_kb": 64, 00:12:50.781 "state": "online", 00:12:50.781 "raid_level": "raid5f", 00:12:50.781 "superblock": false, 00:12:50.781 "num_base_bdevs": 4, 00:12:50.781 "num_base_bdevs_discovered": 4, 00:12:50.781 "num_base_bdevs_operational": 4, 00:12:50.781 "process": { 00:12:50.781 "type": "rebuild", 00:12:50.781 "target": "spare", 00:12:50.781 "progress": { 00:12:50.781 "blocks": 63360, 00:12:50.781 "percent": 32 00:12:50.781 } 00:12:50.781 }, 00:12:50.781 "base_bdevs_list": [ 00:12:50.781 { 00:12:50.781 "name": "spare", 00:12:50.781 "uuid": "37cbb244-18e5-5d7f-b475-46f711e1d281", 00:12:50.781 "is_configured": true, 00:12:50.781 "data_offset": 0, 00:12:50.781 "data_size": 65536 00:12:50.781 }, 00:12:50.781 { 00:12:50.781 "name": "BaseBdev2", 00:12:50.781 "uuid": "0fc9ba9a-c363-5dd9-87d6-f5b692f8bb7c", 00:12:50.781 "is_configured": true, 00:12:50.781 "data_offset": 0, 00:12:50.781 "data_size": 65536 00:12:50.781 }, 00:12:50.781 { 00:12:50.781 "name": "BaseBdev3", 00:12:50.781 "uuid": "ea908879-5095-59a8-8466-cce63384c92d", 00:12:50.781 "is_configured": true, 00:12:50.781 "data_offset": 0, 00:12:50.781 "data_size": 65536 00:12:50.781 }, 00:12:50.781 { 00:12:50.781 "name": "BaseBdev4", 00:12:50.781 "uuid": "57359082-ac15-56e1-8a80-c7c9d6649ede", 00:12:50.781 "is_configured": true, 00:12:50.781 "data_offset": 0, 00:12:50.781 "data_size": 65536 00:12:50.781 } 
00:12:50.781 ] 00:12:50.781 }' 00:12:50.781 19:53:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:50.781 19:53:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:50.781 19:53:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:50.781 19:53:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:50.781 19:53:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:51.715 19:53:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:51.715 19:53:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:51.715 19:53:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:51.715 19:53:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:51.715 19:53:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:51.715 19:53:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:51.715 19:53:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:51.715 19:53:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.715 19:53:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.715 19:53:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:51.715 19:53:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.715 19:53:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:51.715 "name": "raid_bdev1", 00:12:51.715 "uuid": "663c7b8c-d1c5-4115-b32d-e2ea7ab35bfd", 00:12:51.715 
"strip_size_kb": 64, 00:12:51.715 "state": "online", 00:12:51.715 "raid_level": "raid5f", 00:12:51.715 "superblock": false, 00:12:51.715 "num_base_bdevs": 4, 00:12:51.715 "num_base_bdevs_discovered": 4, 00:12:51.715 "num_base_bdevs_operational": 4, 00:12:51.715 "process": { 00:12:51.715 "type": "rebuild", 00:12:51.715 "target": "spare", 00:12:51.715 "progress": { 00:12:51.715 "blocks": 84480, 00:12:51.715 "percent": 42 00:12:51.715 } 00:12:51.715 }, 00:12:51.715 "base_bdevs_list": [ 00:12:51.715 { 00:12:51.715 "name": "spare", 00:12:51.715 "uuid": "37cbb244-18e5-5d7f-b475-46f711e1d281", 00:12:51.715 "is_configured": true, 00:12:51.715 "data_offset": 0, 00:12:51.715 "data_size": 65536 00:12:51.715 }, 00:12:51.715 { 00:12:51.715 "name": "BaseBdev2", 00:12:51.715 "uuid": "0fc9ba9a-c363-5dd9-87d6-f5b692f8bb7c", 00:12:51.715 "is_configured": true, 00:12:51.715 "data_offset": 0, 00:12:51.715 "data_size": 65536 00:12:51.715 }, 00:12:51.715 { 00:12:51.715 "name": "BaseBdev3", 00:12:51.715 "uuid": "ea908879-5095-59a8-8466-cce63384c92d", 00:12:51.715 "is_configured": true, 00:12:51.715 "data_offset": 0, 00:12:51.715 "data_size": 65536 00:12:51.715 }, 00:12:51.715 { 00:12:51.715 "name": "BaseBdev4", 00:12:51.715 "uuid": "57359082-ac15-56e1-8a80-c7c9d6649ede", 00:12:51.715 "is_configured": true, 00:12:51.715 "data_offset": 0, 00:12:51.715 "data_size": 65536 00:12:51.715 } 00:12:51.715 ] 00:12:51.715 }' 00:12:51.715 19:53:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:51.715 19:53:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:51.715 19:53:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:51.715 19:53:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:51.715 19:53:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:52.648 19:53:43 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:52.648 19:53:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:52.648 19:53:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:52.648 19:53:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:52.648 19:53:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:52.648 19:53:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:52.648 19:53:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:52.648 19:53:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:52.648 19:53:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.648 19:53:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.648 19:53:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.648 19:53:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:52.648 "name": "raid_bdev1", 00:12:52.648 "uuid": "663c7b8c-d1c5-4115-b32d-e2ea7ab35bfd", 00:12:52.648 "strip_size_kb": 64, 00:12:52.648 "state": "online", 00:12:52.648 "raid_level": "raid5f", 00:12:52.648 "superblock": false, 00:12:52.648 "num_base_bdevs": 4, 00:12:52.648 "num_base_bdevs_discovered": 4, 00:12:52.648 "num_base_bdevs_operational": 4, 00:12:52.648 "process": { 00:12:52.648 "type": "rebuild", 00:12:52.648 "target": "spare", 00:12:52.648 "progress": { 00:12:52.648 "blocks": 105600, 00:12:52.648 "percent": 53 00:12:52.648 } 00:12:52.648 }, 00:12:52.648 "base_bdevs_list": [ 00:12:52.648 { 00:12:52.648 "name": "spare", 00:12:52.648 "uuid": "37cbb244-18e5-5d7f-b475-46f711e1d281", 
00:12:52.648 "is_configured": true, 00:12:52.648 "data_offset": 0, 00:12:52.648 "data_size": 65536 00:12:52.648 }, 00:12:52.648 { 00:12:52.648 "name": "BaseBdev2", 00:12:52.648 "uuid": "0fc9ba9a-c363-5dd9-87d6-f5b692f8bb7c", 00:12:52.648 "is_configured": true, 00:12:52.648 "data_offset": 0, 00:12:52.648 "data_size": 65536 00:12:52.648 }, 00:12:52.648 { 00:12:52.648 "name": "BaseBdev3", 00:12:52.648 "uuid": "ea908879-5095-59a8-8466-cce63384c92d", 00:12:52.648 "is_configured": true, 00:12:52.648 "data_offset": 0, 00:12:52.648 "data_size": 65536 00:12:52.648 }, 00:12:52.648 { 00:12:52.648 "name": "BaseBdev4", 00:12:52.648 "uuid": "57359082-ac15-56e1-8a80-c7c9d6649ede", 00:12:52.648 "is_configured": true, 00:12:52.648 "data_offset": 0, 00:12:52.648 "data_size": 65536 00:12:52.648 } 00:12:52.648 ] 00:12:52.648 }' 00:12:52.648 19:53:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:52.906 19:53:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:52.906 19:53:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:52.906 19:53:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:52.906 19:53:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:53.842 19:53:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:53.842 19:53:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:53.842 19:53:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:53.842 19:53:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:53.842 19:53:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:53.842 19:53:44 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:53.842 19:53:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:53.842 19:53:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.842 19:53:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.842 19:53:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:53.842 19:53:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.842 19:53:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:53.842 "name": "raid_bdev1", 00:12:53.842 "uuid": "663c7b8c-d1c5-4115-b32d-e2ea7ab35bfd", 00:12:53.842 "strip_size_kb": 64, 00:12:53.842 "state": "online", 00:12:53.842 "raid_level": "raid5f", 00:12:53.842 "superblock": false, 00:12:53.842 "num_base_bdevs": 4, 00:12:53.842 "num_base_bdevs_discovered": 4, 00:12:53.842 "num_base_bdevs_operational": 4, 00:12:53.842 "process": { 00:12:53.842 "type": "rebuild", 00:12:53.842 "target": "spare", 00:12:53.842 "progress": { 00:12:53.842 "blocks": 126720, 00:12:53.842 "percent": 64 00:12:53.842 } 00:12:53.842 }, 00:12:53.842 "base_bdevs_list": [ 00:12:53.842 { 00:12:53.842 "name": "spare", 00:12:53.842 "uuid": "37cbb244-18e5-5d7f-b475-46f711e1d281", 00:12:53.842 "is_configured": true, 00:12:53.842 "data_offset": 0, 00:12:53.842 "data_size": 65536 00:12:53.842 }, 00:12:53.842 { 00:12:53.842 "name": "BaseBdev2", 00:12:53.842 "uuid": "0fc9ba9a-c363-5dd9-87d6-f5b692f8bb7c", 00:12:53.842 "is_configured": true, 00:12:53.842 "data_offset": 0, 00:12:53.842 "data_size": 65536 00:12:53.842 }, 00:12:53.842 { 00:12:53.842 "name": "BaseBdev3", 00:12:53.842 "uuid": "ea908879-5095-59a8-8466-cce63384c92d", 00:12:53.842 "is_configured": true, 00:12:53.842 "data_offset": 0, 00:12:53.842 "data_size": 65536 00:12:53.842 }, 00:12:53.842 { 00:12:53.842 "name": 
"BaseBdev4", 00:12:53.842 "uuid": "57359082-ac15-56e1-8a80-c7c9d6649ede", 00:12:53.842 "is_configured": true, 00:12:53.842 "data_offset": 0, 00:12:53.842 "data_size": 65536 00:12:53.842 } 00:12:53.842 ] 00:12:53.842 }' 00:12:53.842 19:53:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:53.842 19:53:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:53.842 19:53:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:53.842 19:53:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:53.842 19:53:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:55.215 19:53:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:55.215 19:53:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:55.215 19:53:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:55.215 19:53:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:55.215 19:53:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:55.215 19:53:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:55.215 19:53:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:55.215 19:53:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.215 19:53:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.215 19:53:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:55.215 19:53:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.215 19:53:45 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:55.215 "name": "raid_bdev1", 00:12:55.215 "uuid": "663c7b8c-d1c5-4115-b32d-e2ea7ab35bfd", 00:12:55.215 "strip_size_kb": 64, 00:12:55.215 "state": "online", 00:12:55.215 "raid_level": "raid5f", 00:12:55.215 "superblock": false, 00:12:55.215 "num_base_bdevs": 4, 00:12:55.215 "num_base_bdevs_discovered": 4, 00:12:55.215 "num_base_bdevs_operational": 4, 00:12:55.215 "process": { 00:12:55.215 "type": "rebuild", 00:12:55.215 "target": "spare", 00:12:55.215 "progress": { 00:12:55.215 "blocks": 147840, 00:12:55.215 "percent": 75 00:12:55.215 } 00:12:55.215 }, 00:12:55.215 "base_bdevs_list": [ 00:12:55.215 { 00:12:55.215 "name": "spare", 00:12:55.215 "uuid": "37cbb244-18e5-5d7f-b475-46f711e1d281", 00:12:55.215 "is_configured": true, 00:12:55.215 "data_offset": 0, 00:12:55.215 "data_size": 65536 00:12:55.215 }, 00:12:55.215 { 00:12:55.215 "name": "BaseBdev2", 00:12:55.215 "uuid": "0fc9ba9a-c363-5dd9-87d6-f5b692f8bb7c", 00:12:55.215 "is_configured": true, 00:12:55.215 "data_offset": 0, 00:12:55.215 "data_size": 65536 00:12:55.215 }, 00:12:55.215 { 00:12:55.215 "name": "BaseBdev3", 00:12:55.215 "uuid": "ea908879-5095-59a8-8466-cce63384c92d", 00:12:55.215 "is_configured": true, 00:12:55.215 "data_offset": 0, 00:12:55.215 "data_size": 65536 00:12:55.215 }, 00:12:55.215 { 00:12:55.215 "name": "BaseBdev4", 00:12:55.215 "uuid": "57359082-ac15-56e1-8a80-c7c9d6649ede", 00:12:55.215 "is_configured": true, 00:12:55.215 "data_offset": 0, 00:12:55.215 "data_size": 65536 00:12:55.215 } 00:12:55.215 ] 00:12:55.215 }' 00:12:55.215 19:53:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:55.215 19:53:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:55.215 19:53:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:55.215 19:53:45 bdev_raid.raid5f_rebuild_test 
-- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:55.215 19:53:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:56.148 19:53:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:56.148 19:53:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:56.148 19:53:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:56.148 19:53:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:56.148 19:53:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:56.148 19:53:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:56.148 19:53:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:56.148 19:53:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:56.148 19:53:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.148 19:53:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.148 19:53:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.148 19:53:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:56.148 "name": "raid_bdev1", 00:12:56.148 "uuid": "663c7b8c-d1c5-4115-b32d-e2ea7ab35bfd", 00:12:56.148 "strip_size_kb": 64, 00:12:56.148 "state": "online", 00:12:56.148 "raid_level": "raid5f", 00:12:56.148 "superblock": false, 00:12:56.148 "num_base_bdevs": 4, 00:12:56.148 "num_base_bdevs_discovered": 4, 00:12:56.148 "num_base_bdevs_operational": 4, 00:12:56.148 "process": { 00:12:56.148 "type": "rebuild", 00:12:56.148 "target": "spare", 00:12:56.148 "progress": { 00:12:56.148 "blocks": 167040, 00:12:56.148 "percent": 84 
00:12:56.148 } 00:12:56.148 }, 00:12:56.148 "base_bdevs_list": [ 00:12:56.148 { 00:12:56.148 "name": "spare", 00:12:56.148 "uuid": "37cbb244-18e5-5d7f-b475-46f711e1d281", 00:12:56.148 "is_configured": true, 00:12:56.148 "data_offset": 0, 00:12:56.148 "data_size": 65536 00:12:56.148 }, 00:12:56.148 { 00:12:56.148 "name": "BaseBdev2", 00:12:56.148 "uuid": "0fc9ba9a-c363-5dd9-87d6-f5b692f8bb7c", 00:12:56.149 "is_configured": true, 00:12:56.149 "data_offset": 0, 00:12:56.149 "data_size": 65536 00:12:56.149 }, 00:12:56.149 { 00:12:56.149 "name": "BaseBdev3", 00:12:56.149 "uuid": "ea908879-5095-59a8-8466-cce63384c92d", 00:12:56.149 "is_configured": true, 00:12:56.149 "data_offset": 0, 00:12:56.149 "data_size": 65536 00:12:56.149 }, 00:12:56.149 { 00:12:56.149 "name": "BaseBdev4", 00:12:56.149 "uuid": "57359082-ac15-56e1-8a80-c7c9d6649ede", 00:12:56.149 "is_configured": true, 00:12:56.149 "data_offset": 0, 00:12:56.149 "data_size": 65536 00:12:56.149 } 00:12:56.149 ] 00:12:56.149 }' 00:12:56.149 19:53:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:56.149 19:53:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:56.149 19:53:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:56.149 19:53:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:56.149 19:53:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:57.081 19:53:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:57.081 19:53:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:57.081 19:53:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:57.081 19:53:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 
00:12:57.081 19:53:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:57.081 19:53:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:57.081 19:53:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:57.081 19:53:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:57.081 19:53:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.081 19:53:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.081 19:53:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.081 19:53:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:57.081 "name": "raid_bdev1", 00:12:57.081 "uuid": "663c7b8c-d1c5-4115-b32d-e2ea7ab35bfd", 00:12:57.081 "strip_size_kb": 64, 00:12:57.081 "state": "online", 00:12:57.081 "raid_level": "raid5f", 00:12:57.081 "superblock": false, 00:12:57.081 "num_base_bdevs": 4, 00:12:57.081 "num_base_bdevs_discovered": 4, 00:12:57.081 "num_base_bdevs_operational": 4, 00:12:57.081 "process": { 00:12:57.081 "type": "rebuild", 00:12:57.081 "target": "spare", 00:12:57.081 "progress": { 00:12:57.081 "blocks": 188160, 00:12:57.081 "percent": 95 00:12:57.081 } 00:12:57.081 }, 00:12:57.081 "base_bdevs_list": [ 00:12:57.081 { 00:12:57.081 "name": "spare", 00:12:57.081 "uuid": "37cbb244-18e5-5d7f-b475-46f711e1d281", 00:12:57.081 "is_configured": true, 00:12:57.081 "data_offset": 0, 00:12:57.081 "data_size": 65536 00:12:57.081 }, 00:12:57.081 { 00:12:57.081 "name": "BaseBdev2", 00:12:57.081 "uuid": "0fc9ba9a-c363-5dd9-87d6-f5b692f8bb7c", 00:12:57.081 "is_configured": true, 00:12:57.081 "data_offset": 0, 00:12:57.081 "data_size": 65536 00:12:57.081 }, 00:12:57.081 { 00:12:57.081 "name": "BaseBdev3", 00:12:57.081 "uuid": 
"ea908879-5095-59a8-8466-cce63384c92d", 00:12:57.081 "is_configured": true, 00:12:57.081 "data_offset": 0, 00:12:57.081 "data_size": 65536 00:12:57.081 }, 00:12:57.081 { 00:12:57.081 "name": "BaseBdev4", 00:12:57.082 "uuid": "57359082-ac15-56e1-8a80-c7c9d6649ede", 00:12:57.082 "is_configured": true, 00:12:57.082 "data_offset": 0, 00:12:57.082 "data_size": 65536 00:12:57.082 } 00:12:57.082 ] 00:12:57.082 }' 00:12:57.082 19:53:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:57.082 19:53:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:57.082 19:53:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:57.339 19:53:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:57.339 19:53:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:57.597 [2024-11-26 19:53:48.364176] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:12:57.597 [2024-11-26 19:53:48.364245] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:12:57.597 [2024-11-26 19:53:48.364290] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:58.162 19:53:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:58.162 19:53:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:58.162 19:53:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:58.162 19:53:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:58.162 19:53:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:58.162 19:53:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 
00:12:58.162 19:53:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:58.162 19:53:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:58.162 19:53:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.162 19:53:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.162 19:53:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.162 19:53:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:58.162 "name": "raid_bdev1", 00:12:58.162 "uuid": "663c7b8c-d1c5-4115-b32d-e2ea7ab35bfd", 00:12:58.162 "strip_size_kb": 64, 00:12:58.162 "state": "online", 00:12:58.162 "raid_level": "raid5f", 00:12:58.162 "superblock": false, 00:12:58.162 "num_base_bdevs": 4, 00:12:58.162 "num_base_bdevs_discovered": 4, 00:12:58.162 "num_base_bdevs_operational": 4, 00:12:58.162 "base_bdevs_list": [ 00:12:58.162 { 00:12:58.162 "name": "spare", 00:12:58.162 "uuid": "37cbb244-18e5-5d7f-b475-46f711e1d281", 00:12:58.162 "is_configured": true, 00:12:58.162 "data_offset": 0, 00:12:58.162 "data_size": 65536 00:12:58.162 }, 00:12:58.162 { 00:12:58.162 "name": "BaseBdev2", 00:12:58.162 "uuid": "0fc9ba9a-c363-5dd9-87d6-f5b692f8bb7c", 00:12:58.162 "is_configured": true, 00:12:58.162 "data_offset": 0, 00:12:58.162 "data_size": 65536 00:12:58.162 }, 00:12:58.162 { 00:12:58.162 "name": "BaseBdev3", 00:12:58.162 "uuid": "ea908879-5095-59a8-8466-cce63384c92d", 00:12:58.162 "is_configured": true, 00:12:58.162 "data_offset": 0, 00:12:58.162 "data_size": 65536 00:12:58.162 }, 00:12:58.162 { 00:12:58.162 "name": "BaseBdev4", 00:12:58.162 "uuid": "57359082-ac15-56e1-8a80-c7c9d6649ede", 00:12:58.162 "is_configured": true, 00:12:58.162 "data_offset": 0, 00:12:58.162 "data_size": 65536 00:12:58.162 } 00:12:58.162 ] 00:12:58.162 }' 00:12:58.162 19:53:49 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:58.420 19:53:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:12:58.420 19:53:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:58.420 19:53:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:12:58.420 19:53:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:12:58.420 19:53:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:58.420 19:53:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:58.420 19:53:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:58.420 19:53:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:58.420 19:53:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:58.420 19:53:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:58.420 19:53:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:58.420 19:53:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.420 19:53:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.420 19:53:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.420 19:53:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:58.420 "name": "raid_bdev1", 00:12:58.420 "uuid": "663c7b8c-d1c5-4115-b32d-e2ea7ab35bfd", 00:12:58.420 "strip_size_kb": 64, 00:12:58.420 "state": "online", 00:12:58.420 "raid_level": "raid5f", 00:12:58.420 "superblock": false, 00:12:58.420 "num_base_bdevs": 4, 00:12:58.420 
"num_base_bdevs_discovered": 4, 00:12:58.420 "num_base_bdevs_operational": 4, 00:12:58.420 "base_bdevs_list": [ 00:12:58.420 { 00:12:58.420 "name": "spare", 00:12:58.420 "uuid": "37cbb244-18e5-5d7f-b475-46f711e1d281", 00:12:58.420 "is_configured": true, 00:12:58.420 "data_offset": 0, 00:12:58.420 "data_size": 65536 00:12:58.420 }, 00:12:58.420 { 00:12:58.420 "name": "BaseBdev2", 00:12:58.420 "uuid": "0fc9ba9a-c363-5dd9-87d6-f5b692f8bb7c", 00:12:58.420 "is_configured": true, 00:12:58.420 "data_offset": 0, 00:12:58.420 "data_size": 65536 00:12:58.420 }, 00:12:58.420 { 00:12:58.420 "name": "BaseBdev3", 00:12:58.420 "uuid": "ea908879-5095-59a8-8466-cce63384c92d", 00:12:58.420 "is_configured": true, 00:12:58.420 "data_offset": 0, 00:12:58.420 "data_size": 65536 00:12:58.420 }, 00:12:58.420 { 00:12:58.420 "name": "BaseBdev4", 00:12:58.420 "uuid": "57359082-ac15-56e1-8a80-c7c9d6649ede", 00:12:58.420 "is_configured": true, 00:12:58.420 "data_offset": 0, 00:12:58.420 "data_size": 65536 00:12:58.420 } 00:12:58.420 ] 00:12:58.420 }' 00:12:58.420 19:53:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:58.420 19:53:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:58.420 19:53:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:58.420 19:53:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:58.420 19:53:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:12:58.420 19:53:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:58.420 19:53:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:58.420 19:53:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:58.420 19:53:49 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:58.420 19:53:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:58.420 19:53:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:58.420 19:53:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:58.420 19:53:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:58.420 19:53:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:58.420 19:53:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:58.420 19:53:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:58.420 19:53:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.420 19:53:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.420 19:53:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.420 19:53:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:58.420 "name": "raid_bdev1", 00:12:58.420 "uuid": "663c7b8c-d1c5-4115-b32d-e2ea7ab35bfd", 00:12:58.420 "strip_size_kb": 64, 00:12:58.420 "state": "online", 00:12:58.420 "raid_level": "raid5f", 00:12:58.420 "superblock": false, 00:12:58.420 "num_base_bdevs": 4, 00:12:58.420 "num_base_bdevs_discovered": 4, 00:12:58.420 "num_base_bdevs_operational": 4, 00:12:58.420 "base_bdevs_list": [ 00:12:58.420 { 00:12:58.420 "name": "spare", 00:12:58.420 "uuid": "37cbb244-18e5-5d7f-b475-46f711e1d281", 00:12:58.420 "is_configured": true, 00:12:58.420 "data_offset": 0, 00:12:58.420 "data_size": 65536 00:12:58.420 }, 00:12:58.420 { 00:12:58.420 "name": "BaseBdev2", 00:12:58.420 "uuid": "0fc9ba9a-c363-5dd9-87d6-f5b692f8bb7c", 00:12:58.420 "is_configured": true, 00:12:58.420 
"data_offset": 0, 00:12:58.420 "data_size": 65536 00:12:58.420 }, 00:12:58.420 { 00:12:58.420 "name": "BaseBdev3", 00:12:58.420 "uuid": "ea908879-5095-59a8-8466-cce63384c92d", 00:12:58.420 "is_configured": true, 00:12:58.420 "data_offset": 0, 00:12:58.420 "data_size": 65536 00:12:58.420 }, 00:12:58.420 { 00:12:58.420 "name": "BaseBdev4", 00:12:58.420 "uuid": "57359082-ac15-56e1-8a80-c7c9d6649ede", 00:12:58.420 "is_configured": true, 00:12:58.420 "data_offset": 0, 00:12:58.420 "data_size": 65536 00:12:58.420 } 00:12:58.420 ] 00:12:58.420 }' 00:12:58.420 19:53:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:58.420 19:53:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.676 19:53:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:58.677 19:53:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.677 19:53:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.677 [2024-11-26 19:53:49.529315] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:58.677 [2024-11-26 19:53:49.529356] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:58.677 [2024-11-26 19:53:49.529438] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:58.677 [2024-11-26 19:53:49.529523] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:58.677 [2024-11-26 19:53:49.529533] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:12:58.677 19:53:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.677 19:53:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:12:58.677 19:53:49 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:58.677 19:53:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.677 19:53:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.677 19:53:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.677 19:53:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:12:58.677 19:53:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:12:58.677 19:53:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:12:58.677 19:53:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:12:58.677 19:53:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:58.677 19:53:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:12:58.677 19:53:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:58.677 19:53:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:58.677 19:53:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:58.677 19:53:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:12:58.677 19:53:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:58.677 19:53:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:58.677 19:53:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:12:58.934 /dev/nbd0 00:12:58.934 19:53:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:58.934 19:53:49 bdev_raid.raid5f_rebuild_test 
-- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:58.934 19:53:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:58.934 19:53:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:12:58.934 19:53:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:58.934 19:53:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:58.934 19:53:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:58.934 19:53:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:12:58.934 19:53:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:58.934 19:53:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:58.934 19:53:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:58.934 1+0 records in 00:12:58.934 1+0 records out 00:12:58.934 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00055528 s, 7.4 MB/s 00:12:58.934 19:53:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:58.934 19:53:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:12:58.934 19:53:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:58.934 19:53:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:58.934 19:53:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:12:58.934 19:53:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:58.934 19:53:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:58.934 
19:53:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:12:59.192 /dev/nbd1 00:12:59.192 19:53:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:59.192 19:53:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:59.192 19:53:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:12:59.192 19:53:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:12:59.192 19:53:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:59.192 19:53:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:59.192 19:53:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:12:59.192 19:53:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:12:59.192 19:53:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:59.192 19:53:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:59.192 19:53:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:59.192 1+0 records in 00:12:59.192 1+0 records out 00:12:59.192 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000283362 s, 14.5 MB/s 00:12:59.192 19:53:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:59.192 19:53:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:12:59.192 19:53:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:59.192 19:53:50 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:59.192 19:53:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:12:59.192 19:53:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:59.192 19:53:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:59.192 19:53:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:12:59.450 19:53:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:12:59.450 19:53:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:59.450 19:53:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:59.450 19:53:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:59.450 19:53:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:12:59.450 19:53:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:59.450 19:53:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:59.450 19:53:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:59.450 19:53:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:59.450 19:53:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:59.450 19:53:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:59.450 19:53:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:59.450 19:53:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:59.450 19:53:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 
00:12:59.450 19:53:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:12:59.450 19:53:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:59.450 19:53:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:12:59.708 19:53:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:59.708 19:53:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:59.708 19:53:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:59.708 19:53:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:59.708 19:53:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:59.708 19:53:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:59.708 19:53:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:12:59.708 19:53:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:12:59.708 19:53:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:12:59.708 19:53:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 82126 00:12:59.708 19:53:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 82126 ']' 00:12:59.708 19:53:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 82126 00:12:59.708 19:53:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:12:59.708 19:53:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:59.708 19:53:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82126 00:12:59.708 killing process with pid 82126 00:12:59.708 Received shutdown signal, test time 
was about 60.000000 seconds 00:12:59.708 00:12:59.709 Latency(us) 00:12:59.709 [2024-11-26T19:53:50.646Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:59.709 [2024-11-26T19:53:50.646Z] =================================================================================================================== 00:12:59.709 [2024-11-26T19:53:50.646Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:12:59.709 19:53:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:59.709 19:53:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:59.709 19:53:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82126' 00:12:59.709 19:53:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@973 -- # kill 82126 00:12:59.709 [2024-11-26 19:53:50.597668] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:59.709 19:53:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@978 -- # wait 82126 00:13:00.273 [2024-11-26 19:53:50.915946] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:00.840 19:53:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:13:00.840 00:13:00.840 real 0m17.937s 00:13:00.840 user 0m20.890s 00:13:00.840 sys 0m1.754s 00:13:00.840 ************************************ 00:13:00.840 END TEST raid5f_rebuild_test 00:13:00.840 ************************************ 00:13:00.840 19:53:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:00.840 19:53:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.840 19:53:51 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 4 true false true 00:13:00.840 19:53:51 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:13:00.840 19:53:51 bdev_raid -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:13:00.840 19:53:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:00.840 ************************************ 00:13:00.840 START TEST raid5f_rebuild_test_sb 00:13:00.840 ************************************ 00:13:00.840 19:53:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 4 true false true 00:13:00.840 19:53:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:13:00.840 19:53:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:13:00.840 19:53:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:13:00.840 19:53:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:13:00.840 19:53:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:00.840 19:53:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:00.840 19:53:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:00.840 19:53:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:00.840 19:53:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:00.840 19:53:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:00.840 19:53:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:00.840 19:53:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:00.840 19:53:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:00.840 19:53:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:13:00.840 19:53:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:00.840 19:53:51 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:00.840 19:53:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:13:00.840 19:53:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:00.840 19:53:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:00.840 19:53:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:00.840 19:53:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:00.840 19:53:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:00.840 19:53:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:00.840 19:53:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:00.840 19:53:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:00.840 19:53:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:00.840 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:13:00.840 19:53:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:13:00.840 19:53:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:13:00.840 19:53:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:13:00.840 19:53:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:13:00.840 19:53:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:13:00.840 19:53:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:13:00.840 19:53:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=82631 00:13:00.840 19:53:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 82631 00:13:00.840 19:53:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 82631 ']' 00:13:00.840 19:53:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:00.840 19:53:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:00.840 19:53:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:00.840 19:53:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:00.840 19:53:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:00.840 19:53:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:01.098 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:01.098 Zero copy mechanism will not be used. 
00:13:01.098 [2024-11-26 19:53:51.793176] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 00:13:01.098 [2024-11-26 19:53:51.793308] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82631 ] 00:13:01.098 [2024-11-26 19:53:51.954108] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:01.356 [2024-11-26 19:53:52.067113] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:01.356 [2024-11-26 19:53:52.213835] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:01.356 [2024-11-26 19:53:52.213889] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:01.923 19:53:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:01.923 19:53:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:13:01.923 19:53:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:01.923 19:53:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:01.923 19:53:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.923 19:53:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.923 BaseBdev1_malloc 00:13:01.923 19:53:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.923 19:53:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:01.923 19:53:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.923 19:53:52 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:13:01.923 [2024-11-26 19:53:52.623380] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:01.923 [2024-11-26 19:53:52.623443] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:01.923 [2024-11-26 19:53:52.623468] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:01.923 [2024-11-26 19:53:52.623480] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:01.923 [2024-11-26 19:53:52.625718] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:01.923 [2024-11-26 19:53:52.625756] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:01.923 BaseBdev1 00:13:01.923 19:53:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.923 19:53:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:01.923 19:53:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:01.923 19:53:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.923 19:53:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.923 BaseBdev2_malloc 00:13:01.923 19:53:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.923 19:53:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:01.923 19:53:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.923 19:53:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.923 [2024-11-26 19:53:52.665074] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:01.923 
[2024-11-26 19:53:52.665251] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:01.923 [2024-11-26 19:53:52.665279] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:01.923 [2024-11-26 19:53:52.665291] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:01.923 [2024-11-26 19:53:52.667515] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:01.923 [2024-11-26 19:53:52.667549] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:01.923 BaseBdev2 00:13:01.923 19:53:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.923 19:53:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:01.923 19:53:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:01.923 19:53:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.923 19:53:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.923 BaseBdev3_malloc 00:13:01.923 19:53:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.923 19:53:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:13:01.923 19:53:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.923 19:53:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.923 [2024-11-26 19:53:52.723630] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:13:01.923 [2024-11-26 19:53:52.723802] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:01.923 [2024-11-26 19:53:52.723833] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:01.923 [2024-11-26 19:53:52.723845] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:01.923 [2024-11-26 19:53:52.726032] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:01.923 [2024-11-26 19:53:52.726069] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:01.923 BaseBdev3 00:13:01.923 19:53:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.923 19:53:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:01.923 19:53:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:13:01.923 19:53:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.923 19:53:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.923 BaseBdev4_malloc 00:13:01.923 19:53:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.923 19:53:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:13:01.923 19:53:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.923 19:53:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.923 [2024-11-26 19:53:52.761563] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:13:01.923 [2024-11-26 19:53:52.761713] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:01.923 [2024-11-26 19:53:52.761754] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:13:01.923 [2024-11-26 19:53:52.761811] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: 
bdev claimed 00:13:01.924 [2024-11-26 19:53:52.763992] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:01.924 [2024-11-26 19:53:52.764104] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:13:01.924 BaseBdev4 00:13:01.924 19:53:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.924 19:53:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:01.924 19:53:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.924 19:53:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.924 spare_malloc 00:13:01.924 19:53:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.924 19:53:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:01.924 19:53:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.924 19:53:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.924 spare_delay 00:13:01.924 19:53:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.924 19:53:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:01.924 19:53:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.924 19:53:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.924 [2024-11-26 19:53:52.811423] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:01.924 [2024-11-26 19:53:52.811467] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:01.924 [2024-11-26 19:53:52.811484] 
vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:13:01.924 [2024-11-26 19:53:52.811495] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:01.924 [2024-11-26 19:53:52.813680] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:01.924 [2024-11-26 19:53:52.813714] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:01.924 spare 00:13:01.924 19:53:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.924 19:53:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:13:01.924 19:53:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.924 19:53:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.924 [2024-11-26 19:53:52.819483] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:01.924 [2024-11-26 19:53:52.821413] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:01.924 [2024-11-26 19:53:52.821475] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:01.924 [2024-11-26 19:53:52.821529] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:01.924 [2024-11-26 19:53:52.821715] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:01.924 [2024-11-26 19:53:52.821730] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:13:01.924 [2024-11-26 19:53:52.821982] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:13:01.924 [2024-11-26 19:53:52.827092] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:01.924 
[2024-11-26 19:53:52.827185] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:01.924 [2024-11-26 19:53:52.827436] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:01.924 19:53:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.924 19:53:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:13:01.924 19:53:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:01.924 19:53:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:01.924 19:53:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:01.924 19:53:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:01.924 19:53:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:01.924 19:53:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:01.924 19:53:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:01.924 19:53:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:01.924 19:53:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:01.924 19:53:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:01.924 19:53:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:01.924 19:53:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.924 19:53:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.924 19:53:52 bdev_raid.raid5f_rebuild_test_sb 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.183 19:53:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:02.183 "name": "raid_bdev1", 00:13:02.183 "uuid": "44661987-bb41-43d0-92d2-ac8601dbf76a", 00:13:02.183 "strip_size_kb": 64, 00:13:02.183 "state": "online", 00:13:02.183 "raid_level": "raid5f", 00:13:02.183 "superblock": true, 00:13:02.183 "num_base_bdevs": 4, 00:13:02.183 "num_base_bdevs_discovered": 4, 00:13:02.183 "num_base_bdevs_operational": 4, 00:13:02.183 "base_bdevs_list": [ 00:13:02.183 { 00:13:02.183 "name": "BaseBdev1", 00:13:02.183 "uuid": "eeb8f47a-bcb5-548a-bfea-07e12b938ebf", 00:13:02.183 "is_configured": true, 00:13:02.183 "data_offset": 2048, 00:13:02.183 "data_size": 63488 00:13:02.183 }, 00:13:02.183 { 00:13:02.183 "name": "BaseBdev2", 00:13:02.183 "uuid": "f8c76858-a88b-5578-aba7-b19ebe977e3c", 00:13:02.183 "is_configured": true, 00:13:02.183 "data_offset": 2048, 00:13:02.183 "data_size": 63488 00:13:02.183 }, 00:13:02.183 { 00:13:02.183 "name": "BaseBdev3", 00:13:02.183 "uuid": "ac5b4600-fab8-51e5-bc8a-7ded034d7785", 00:13:02.183 "is_configured": true, 00:13:02.183 "data_offset": 2048, 00:13:02.183 "data_size": 63488 00:13:02.183 }, 00:13:02.183 { 00:13:02.183 "name": "BaseBdev4", 00:13:02.183 "uuid": "e520354a-6e5c-5672-acf8-b3a00a326e6d", 00:13:02.183 "is_configured": true, 00:13:02.183 "data_offset": 2048, 00:13:02.183 "data_size": 63488 00:13:02.183 } 00:13:02.183 ] 00:13:02.183 }' 00:13:02.183 19:53:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:02.183 19:53:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:02.441 19:53:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:02.441 19:53:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.441 19:53:53 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:13:02.441 19:53:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:02.441 [2024-11-26 19:53:53.149401] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:02.441 19:53:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.441 19:53:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=190464 00:13:02.441 19:53:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:02.441 19:53:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:02.441 19:53:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.441 19:53:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:02.441 19:53:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.441 19:53:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:13:02.441 19:53:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:13:02.441 19:53:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:13:02.441 19:53:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:13:02.441 19:53:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:13:02.441 19:53:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:02.441 19:53:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:13:02.441 19:53:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:02.441 19:53:53 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:02.441 19:53:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:02.441 19:53:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:13:02.441 19:53:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:02.441 19:53:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:02.441 19:53:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:13:02.700 [2024-11-26 19:53:53.409258] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:13:02.700 /dev/nbd0 00:13:02.700 19:53:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:02.700 19:53:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:02.700 19:53:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:02.700 19:53:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:13:02.700 19:53:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:02.700 19:53:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:02.700 19:53:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:02.700 19:53:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:13:02.700 19:53:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:02.700 19:53:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:02.700 19:53:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
bs=4096 count=1 iflag=direct 00:13:02.700 1+0 records in 00:13:02.700 1+0 records out 00:13:02.700 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000277013 s, 14.8 MB/s 00:13:02.700 19:53:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:02.700 19:53:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:13:02.700 19:53:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:02.700 19:53:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:02.700 19:53:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:13:02.700 19:53:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:02.700 19:53:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:02.700 19:53:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:13:02.700 19:53:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:13:02.700 19:53:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 192 00:13:02.700 19:53:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=496 oflag=direct 00:13:03.267 496+0 records in 00:13:03.267 496+0 records out 00:13:03.267 97517568 bytes (98 MB, 93 MiB) copied, 0.489942 s, 199 MB/s 00:13:03.267 19:53:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:03.267 19:53:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:03.267 19:53:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:03.267 19:53:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 
-- # local nbd_list 00:13:03.267 19:53:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:13:03.267 19:53:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:03.267 19:53:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:03.267 19:53:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:03.267 19:53:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:03.267 19:53:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:03.267 19:53:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:03.267 19:53:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:03.267 19:53:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:03.267 [2024-11-26 19:53:54.170583] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:03.267 19:53:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:13:03.267 19:53:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:13:03.267 19:53:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:03.267 19:53:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.267 19:53:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.267 [2024-11-26 19:53:54.184383] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:03.267 19:53:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.267 19:53:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online 
raid5f 64 3 00:13:03.267 19:53:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:03.267 19:53:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:03.267 19:53:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:03.267 19:53:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:03.267 19:53:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:03.267 19:53:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:03.267 19:53:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:03.267 19:53:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:03.267 19:53:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:03.267 19:53:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:03.267 19:53:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:03.267 19:53:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.267 19:53:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.526 19:53:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.526 19:53:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:03.526 "name": "raid_bdev1", 00:13:03.526 "uuid": "44661987-bb41-43d0-92d2-ac8601dbf76a", 00:13:03.526 "strip_size_kb": 64, 00:13:03.526 "state": "online", 00:13:03.526 "raid_level": "raid5f", 00:13:03.526 "superblock": true, 00:13:03.526 "num_base_bdevs": 4, 00:13:03.526 "num_base_bdevs_discovered": 3, 00:13:03.526 
"num_base_bdevs_operational": 3, 00:13:03.526 "base_bdevs_list": [ 00:13:03.526 { 00:13:03.526 "name": null, 00:13:03.526 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:03.526 "is_configured": false, 00:13:03.526 "data_offset": 0, 00:13:03.526 "data_size": 63488 00:13:03.526 }, 00:13:03.526 { 00:13:03.526 "name": "BaseBdev2", 00:13:03.526 "uuid": "f8c76858-a88b-5578-aba7-b19ebe977e3c", 00:13:03.526 "is_configured": true, 00:13:03.526 "data_offset": 2048, 00:13:03.526 "data_size": 63488 00:13:03.526 }, 00:13:03.526 { 00:13:03.526 "name": "BaseBdev3", 00:13:03.526 "uuid": "ac5b4600-fab8-51e5-bc8a-7ded034d7785", 00:13:03.526 "is_configured": true, 00:13:03.526 "data_offset": 2048, 00:13:03.526 "data_size": 63488 00:13:03.526 }, 00:13:03.526 { 00:13:03.526 "name": "BaseBdev4", 00:13:03.526 "uuid": "e520354a-6e5c-5672-acf8-b3a00a326e6d", 00:13:03.526 "is_configured": true, 00:13:03.526 "data_offset": 2048, 00:13:03.526 "data_size": 63488 00:13:03.526 } 00:13:03.526 ] 00:13:03.526 }' 00:13:03.526 19:53:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:03.526 19:53:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.785 19:53:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:03.785 19:53:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.785 19:53:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.785 [2024-11-26 19:53:54.500468] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:03.785 [2024-11-26 19:53:54.510942] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002aa50 00:13:03.785 19:53:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.785 19:53:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:03.785 
[2024-11-26 19:53:54.517838] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:04.719 19:53:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:04.719 19:53:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:04.719 19:53:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:04.719 19:53:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:04.719 19:53:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:04.719 19:53:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:04.719 19:53:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.719 19:53:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:04.719 19:53:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.719 19:53:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.719 19:53:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:04.719 "name": "raid_bdev1", 00:13:04.719 "uuid": "44661987-bb41-43d0-92d2-ac8601dbf76a", 00:13:04.719 "strip_size_kb": 64, 00:13:04.719 "state": "online", 00:13:04.719 "raid_level": "raid5f", 00:13:04.719 "superblock": true, 00:13:04.719 "num_base_bdevs": 4, 00:13:04.719 "num_base_bdevs_discovered": 4, 00:13:04.719 "num_base_bdevs_operational": 4, 00:13:04.719 "process": { 00:13:04.719 "type": "rebuild", 00:13:04.719 "target": "spare", 00:13:04.719 "progress": { 00:13:04.719 "blocks": 17280, 00:13:04.719 "percent": 9 00:13:04.719 } 00:13:04.719 }, 00:13:04.720 "base_bdevs_list": [ 00:13:04.720 { 00:13:04.720 "name": 
"spare", 00:13:04.720 "uuid": "a29002bb-b8c5-58cb-a926-fbfe05167775", 00:13:04.720 "is_configured": true, 00:13:04.720 "data_offset": 2048, 00:13:04.720 "data_size": 63488 00:13:04.720 }, 00:13:04.720 { 00:13:04.720 "name": "BaseBdev2", 00:13:04.720 "uuid": "f8c76858-a88b-5578-aba7-b19ebe977e3c", 00:13:04.720 "is_configured": true, 00:13:04.720 "data_offset": 2048, 00:13:04.720 "data_size": 63488 00:13:04.720 }, 00:13:04.720 { 00:13:04.720 "name": "BaseBdev3", 00:13:04.720 "uuid": "ac5b4600-fab8-51e5-bc8a-7ded034d7785", 00:13:04.720 "is_configured": true, 00:13:04.720 "data_offset": 2048, 00:13:04.720 "data_size": 63488 00:13:04.720 }, 00:13:04.720 { 00:13:04.720 "name": "BaseBdev4", 00:13:04.720 "uuid": "e520354a-6e5c-5672-acf8-b3a00a326e6d", 00:13:04.720 "is_configured": true, 00:13:04.720 "data_offset": 2048, 00:13:04.720 "data_size": 63488 00:13:04.720 } 00:13:04.720 ] 00:13:04.720 }' 00:13:04.720 19:53:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:04.720 19:53:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:04.720 19:53:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:04.720 19:53:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:04.720 19:53:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:04.720 19:53:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.720 19:53:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.720 [2024-11-26 19:53:55.631132] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:04.978 [2024-11-26 19:53:55.728088] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:04.978 [2024-11-26 
19:53:55.728310] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:04.978 [2024-11-26 19:53:55.728385] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:04.978 [2024-11-26 19:53:55.728411] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:04.978 19:53:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.978 19:53:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:13:04.978 19:53:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:04.978 19:53:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:04.978 19:53:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:04.978 19:53:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:04.978 19:53:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:04.978 19:53:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:04.978 19:53:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:04.978 19:53:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:04.978 19:53:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:04.978 19:53:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:04.978 19:53:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:04.978 19:53:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.978 19:53:55 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:13:04.978 19:53:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.978 19:53:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:04.978 "name": "raid_bdev1", 00:13:04.978 "uuid": "44661987-bb41-43d0-92d2-ac8601dbf76a", 00:13:04.978 "strip_size_kb": 64, 00:13:04.978 "state": "online", 00:13:04.978 "raid_level": "raid5f", 00:13:04.978 "superblock": true, 00:13:04.978 "num_base_bdevs": 4, 00:13:04.978 "num_base_bdevs_discovered": 3, 00:13:04.978 "num_base_bdevs_operational": 3, 00:13:04.978 "base_bdevs_list": [ 00:13:04.978 { 00:13:04.978 "name": null, 00:13:04.978 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:04.978 "is_configured": false, 00:13:04.978 "data_offset": 0, 00:13:04.978 "data_size": 63488 00:13:04.978 }, 00:13:04.978 { 00:13:04.978 "name": "BaseBdev2", 00:13:04.978 "uuid": "f8c76858-a88b-5578-aba7-b19ebe977e3c", 00:13:04.978 "is_configured": true, 00:13:04.978 "data_offset": 2048, 00:13:04.978 "data_size": 63488 00:13:04.978 }, 00:13:04.978 { 00:13:04.978 "name": "BaseBdev3", 00:13:04.978 "uuid": "ac5b4600-fab8-51e5-bc8a-7ded034d7785", 00:13:04.978 "is_configured": true, 00:13:04.978 "data_offset": 2048, 00:13:04.978 "data_size": 63488 00:13:04.978 }, 00:13:04.978 { 00:13:04.978 "name": "BaseBdev4", 00:13:04.978 "uuid": "e520354a-6e5c-5672-acf8-b3a00a326e6d", 00:13:04.978 "is_configured": true, 00:13:04.978 "data_offset": 2048, 00:13:04.978 "data_size": 63488 00:13:04.978 } 00:13:04.978 ] 00:13:04.978 }' 00:13:04.978 19:53:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:04.978 19:53:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:05.236 19:53:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:05.236 19:53:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:13:05.236 19:53:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:05.236 19:53:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:05.236 19:53:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:05.236 19:53:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:05.236 19:53:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:05.236 19:53:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.236 19:53:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:05.236 19:53:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.236 19:53:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:05.236 "name": "raid_bdev1", 00:13:05.236 "uuid": "44661987-bb41-43d0-92d2-ac8601dbf76a", 00:13:05.236 "strip_size_kb": 64, 00:13:05.236 "state": "online", 00:13:05.236 "raid_level": "raid5f", 00:13:05.236 "superblock": true, 00:13:05.236 "num_base_bdevs": 4, 00:13:05.236 "num_base_bdevs_discovered": 3, 00:13:05.236 "num_base_bdevs_operational": 3, 00:13:05.236 "base_bdevs_list": [ 00:13:05.236 { 00:13:05.236 "name": null, 00:13:05.236 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:05.236 "is_configured": false, 00:13:05.236 "data_offset": 0, 00:13:05.236 "data_size": 63488 00:13:05.236 }, 00:13:05.236 { 00:13:05.236 "name": "BaseBdev2", 00:13:05.236 "uuid": "f8c76858-a88b-5578-aba7-b19ebe977e3c", 00:13:05.236 "is_configured": true, 00:13:05.236 "data_offset": 2048, 00:13:05.236 "data_size": 63488 00:13:05.236 }, 00:13:05.236 { 00:13:05.236 "name": "BaseBdev3", 00:13:05.236 "uuid": "ac5b4600-fab8-51e5-bc8a-7ded034d7785", 00:13:05.236 "is_configured": true, 
00:13:05.236 "data_offset": 2048, 00:13:05.236 "data_size": 63488 00:13:05.236 }, 00:13:05.236 { 00:13:05.236 "name": "BaseBdev4", 00:13:05.236 "uuid": "e520354a-6e5c-5672-acf8-b3a00a326e6d", 00:13:05.236 "is_configured": true, 00:13:05.236 "data_offset": 2048, 00:13:05.236 "data_size": 63488 00:13:05.236 } 00:13:05.236 ] 00:13:05.236 }' 00:13:05.236 19:53:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:05.236 19:53:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:05.236 19:53:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:05.236 19:53:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:05.236 19:53:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:05.236 19:53:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.236 19:53:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:05.236 [2024-11-26 19:53:56.169888] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:05.493 [2024-11-26 19:53:56.177651] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002ab20 00:13:05.494 19:53:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.494 19:53:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:05.494 [2024-11-26 19:53:56.183168] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:06.426 19:53:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:06.426 19:53:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:06.426 19:53:57 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:06.426 19:53:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:06.426 19:53:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:06.426 19:53:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:06.426 19:53:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.426 19:53:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:06.426 19:53:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:06.426 19:53:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.426 19:53:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:06.426 "name": "raid_bdev1", 00:13:06.426 "uuid": "44661987-bb41-43d0-92d2-ac8601dbf76a", 00:13:06.426 "strip_size_kb": 64, 00:13:06.426 "state": "online", 00:13:06.426 "raid_level": "raid5f", 00:13:06.426 "superblock": true, 00:13:06.426 "num_base_bdevs": 4, 00:13:06.426 "num_base_bdevs_discovered": 4, 00:13:06.426 "num_base_bdevs_operational": 4, 00:13:06.426 "process": { 00:13:06.426 "type": "rebuild", 00:13:06.426 "target": "spare", 00:13:06.426 "progress": { 00:13:06.426 "blocks": 19200, 00:13:06.426 "percent": 10 00:13:06.426 } 00:13:06.426 }, 00:13:06.426 "base_bdevs_list": [ 00:13:06.426 { 00:13:06.426 "name": "spare", 00:13:06.426 "uuid": "a29002bb-b8c5-58cb-a926-fbfe05167775", 00:13:06.426 "is_configured": true, 00:13:06.426 "data_offset": 2048, 00:13:06.426 "data_size": 63488 00:13:06.426 }, 00:13:06.426 { 00:13:06.426 "name": "BaseBdev2", 00:13:06.426 "uuid": "f8c76858-a88b-5578-aba7-b19ebe977e3c", 00:13:06.426 "is_configured": true, 00:13:06.426 "data_offset": 2048, 00:13:06.426 "data_size": 63488 
00:13:06.426 }, 00:13:06.426 { 00:13:06.426 "name": "BaseBdev3", 00:13:06.426 "uuid": "ac5b4600-fab8-51e5-bc8a-7ded034d7785", 00:13:06.426 "is_configured": true, 00:13:06.426 "data_offset": 2048, 00:13:06.426 "data_size": 63488 00:13:06.426 }, 00:13:06.426 { 00:13:06.426 "name": "BaseBdev4", 00:13:06.426 "uuid": "e520354a-6e5c-5672-acf8-b3a00a326e6d", 00:13:06.426 "is_configured": true, 00:13:06.426 "data_offset": 2048, 00:13:06.426 "data_size": 63488 00:13:06.426 } 00:13:06.426 ] 00:13:06.426 }' 00:13:06.426 19:53:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:06.426 19:53:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:06.426 19:53:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:06.426 19:53:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:06.426 19:53:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:13:06.426 19:53:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:13:06.426 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:13:06.426 19:53:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:13:06.426 19:53:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:13:06.426 19:53:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=507 00:13:06.426 19:53:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:06.426 19:53:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:06.426 19:53:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:06.426 19:53:57 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:06.426 19:53:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:06.426 19:53:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:06.426 19:53:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:06.426 19:53:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:06.426 19:53:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.426 19:53:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:06.426 19:53:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.426 19:53:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:06.426 "name": "raid_bdev1", 00:13:06.426 "uuid": "44661987-bb41-43d0-92d2-ac8601dbf76a", 00:13:06.426 "strip_size_kb": 64, 00:13:06.426 "state": "online", 00:13:06.426 "raid_level": "raid5f", 00:13:06.426 "superblock": true, 00:13:06.426 "num_base_bdevs": 4, 00:13:06.426 "num_base_bdevs_discovered": 4, 00:13:06.426 "num_base_bdevs_operational": 4, 00:13:06.427 "process": { 00:13:06.427 "type": "rebuild", 00:13:06.427 "target": "spare", 00:13:06.427 "progress": { 00:13:06.427 "blocks": 21120, 00:13:06.427 "percent": 11 00:13:06.427 } 00:13:06.427 }, 00:13:06.427 "base_bdevs_list": [ 00:13:06.427 { 00:13:06.427 "name": "spare", 00:13:06.427 "uuid": "a29002bb-b8c5-58cb-a926-fbfe05167775", 00:13:06.427 "is_configured": true, 00:13:06.427 "data_offset": 2048, 00:13:06.427 "data_size": 63488 00:13:06.427 }, 00:13:06.427 { 00:13:06.427 "name": "BaseBdev2", 00:13:06.427 "uuid": "f8c76858-a88b-5578-aba7-b19ebe977e3c", 00:13:06.427 "is_configured": true, 00:13:06.427 "data_offset": 2048, 00:13:06.427 "data_size": 63488 
00:13:06.427 }, 00:13:06.427 { 00:13:06.427 "name": "BaseBdev3", 00:13:06.427 "uuid": "ac5b4600-fab8-51e5-bc8a-7ded034d7785", 00:13:06.427 "is_configured": true, 00:13:06.427 "data_offset": 2048, 00:13:06.427 "data_size": 63488 00:13:06.427 }, 00:13:06.427 { 00:13:06.427 "name": "BaseBdev4", 00:13:06.427 "uuid": "e520354a-6e5c-5672-acf8-b3a00a326e6d", 00:13:06.427 "is_configured": true, 00:13:06.427 "data_offset": 2048, 00:13:06.427 "data_size": 63488 00:13:06.427 } 00:13:06.427 ] 00:13:06.427 }' 00:13:06.427 19:53:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:06.427 19:53:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:06.427 19:53:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:06.684 19:53:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:06.684 19:53:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:07.617 19:53:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:07.617 19:53:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:07.617 19:53:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:07.617 19:53:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:07.617 19:53:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:07.617 19:53:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:07.617 19:53:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:07.617 19:53:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 
00:13:07.617 19:53:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.617 19:53:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:07.617 19:53:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.617 19:53:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:07.617 "name": "raid_bdev1", 00:13:07.617 "uuid": "44661987-bb41-43d0-92d2-ac8601dbf76a", 00:13:07.617 "strip_size_kb": 64, 00:13:07.617 "state": "online", 00:13:07.617 "raid_level": "raid5f", 00:13:07.617 "superblock": true, 00:13:07.617 "num_base_bdevs": 4, 00:13:07.617 "num_base_bdevs_discovered": 4, 00:13:07.617 "num_base_bdevs_operational": 4, 00:13:07.617 "process": { 00:13:07.617 "type": "rebuild", 00:13:07.617 "target": "spare", 00:13:07.617 "progress": { 00:13:07.617 "blocks": 40320, 00:13:07.617 "percent": 21 00:13:07.617 } 00:13:07.617 }, 00:13:07.617 "base_bdevs_list": [ 00:13:07.617 { 00:13:07.617 "name": "spare", 00:13:07.617 "uuid": "a29002bb-b8c5-58cb-a926-fbfe05167775", 00:13:07.617 "is_configured": true, 00:13:07.617 "data_offset": 2048, 00:13:07.617 "data_size": 63488 00:13:07.617 }, 00:13:07.617 { 00:13:07.617 "name": "BaseBdev2", 00:13:07.617 "uuid": "f8c76858-a88b-5578-aba7-b19ebe977e3c", 00:13:07.617 "is_configured": true, 00:13:07.617 "data_offset": 2048, 00:13:07.617 "data_size": 63488 00:13:07.617 }, 00:13:07.617 { 00:13:07.617 "name": "BaseBdev3", 00:13:07.617 "uuid": "ac5b4600-fab8-51e5-bc8a-7ded034d7785", 00:13:07.617 "is_configured": true, 00:13:07.617 "data_offset": 2048, 00:13:07.617 "data_size": 63488 00:13:07.617 }, 00:13:07.617 { 00:13:07.617 "name": "BaseBdev4", 00:13:07.617 "uuid": "e520354a-6e5c-5672-acf8-b3a00a326e6d", 00:13:07.617 "is_configured": true, 00:13:07.617 "data_offset": 2048, 00:13:07.617 "data_size": 63488 00:13:07.617 } 00:13:07.617 ] 00:13:07.617 }' 00:13:07.618 19:53:58 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:07.618 19:53:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:07.618 19:53:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:07.618 19:53:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:07.618 19:53:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:08.589 19:53:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:08.589 19:53:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:08.589 19:53:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:08.589 19:53:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:08.589 19:53:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:08.589 19:53:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:08.589 19:53:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:08.589 19:53:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:08.589 19:53:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.589 19:53:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:08.589 19:53:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.589 19:53:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:08.589 "name": "raid_bdev1", 00:13:08.589 "uuid": "44661987-bb41-43d0-92d2-ac8601dbf76a", 00:13:08.589 
"strip_size_kb": 64, 00:13:08.589 "state": "online", 00:13:08.589 "raid_level": "raid5f", 00:13:08.589 "superblock": true, 00:13:08.589 "num_base_bdevs": 4, 00:13:08.589 "num_base_bdevs_discovered": 4, 00:13:08.589 "num_base_bdevs_operational": 4, 00:13:08.589 "process": { 00:13:08.589 "type": "rebuild", 00:13:08.589 "target": "spare", 00:13:08.589 "progress": { 00:13:08.589 "blocks": 61440, 00:13:08.589 "percent": 32 00:13:08.589 } 00:13:08.589 }, 00:13:08.589 "base_bdevs_list": [ 00:13:08.589 { 00:13:08.589 "name": "spare", 00:13:08.589 "uuid": "a29002bb-b8c5-58cb-a926-fbfe05167775", 00:13:08.589 "is_configured": true, 00:13:08.589 "data_offset": 2048, 00:13:08.589 "data_size": 63488 00:13:08.589 }, 00:13:08.589 { 00:13:08.589 "name": "BaseBdev2", 00:13:08.589 "uuid": "f8c76858-a88b-5578-aba7-b19ebe977e3c", 00:13:08.589 "is_configured": true, 00:13:08.589 "data_offset": 2048, 00:13:08.589 "data_size": 63488 00:13:08.589 }, 00:13:08.589 { 00:13:08.589 "name": "BaseBdev3", 00:13:08.589 "uuid": "ac5b4600-fab8-51e5-bc8a-7ded034d7785", 00:13:08.589 "is_configured": true, 00:13:08.589 "data_offset": 2048, 00:13:08.589 "data_size": 63488 00:13:08.589 }, 00:13:08.589 { 00:13:08.589 "name": "BaseBdev4", 00:13:08.589 "uuid": "e520354a-6e5c-5672-acf8-b3a00a326e6d", 00:13:08.589 "is_configured": true, 00:13:08.589 "data_offset": 2048, 00:13:08.589 "data_size": 63488 00:13:08.589 } 00:13:08.589 ] 00:13:08.589 }' 00:13:08.589 19:53:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:08.848 19:53:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:08.848 19:53:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:08.848 19:53:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:08.848 19:53:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:09.783 
19:54:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:09.783 19:54:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:09.783 19:54:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:09.783 19:54:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:09.783 19:54:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:09.783 19:54:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:09.783 19:54:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:09.783 19:54:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:09.783 19:54:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.783 19:54:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:09.783 19:54:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.783 19:54:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:09.783 "name": "raid_bdev1", 00:13:09.783 "uuid": "44661987-bb41-43d0-92d2-ac8601dbf76a", 00:13:09.783 "strip_size_kb": 64, 00:13:09.783 "state": "online", 00:13:09.783 "raid_level": "raid5f", 00:13:09.783 "superblock": true, 00:13:09.783 "num_base_bdevs": 4, 00:13:09.783 "num_base_bdevs_discovered": 4, 00:13:09.783 "num_base_bdevs_operational": 4, 00:13:09.783 "process": { 00:13:09.783 "type": "rebuild", 00:13:09.783 "target": "spare", 00:13:09.783 "progress": { 00:13:09.783 "blocks": 82560, 00:13:09.783 "percent": 43 00:13:09.783 } 00:13:09.783 }, 00:13:09.783 "base_bdevs_list": [ 00:13:09.783 { 00:13:09.783 "name": "spare", 00:13:09.783 "uuid": 
"a29002bb-b8c5-58cb-a926-fbfe05167775", 00:13:09.783 "is_configured": true, 00:13:09.783 "data_offset": 2048, 00:13:09.783 "data_size": 63488 00:13:09.783 }, 00:13:09.783 { 00:13:09.783 "name": "BaseBdev2", 00:13:09.783 "uuid": "f8c76858-a88b-5578-aba7-b19ebe977e3c", 00:13:09.783 "is_configured": true, 00:13:09.783 "data_offset": 2048, 00:13:09.783 "data_size": 63488 00:13:09.783 }, 00:13:09.783 { 00:13:09.783 "name": "BaseBdev3", 00:13:09.783 "uuid": "ac5b4600-fab8-51e5-bc8a-7ded034d7785", 00:13:09.783 "is_configured": true, 00:13:09.783 "data_offset": 2048, 00:13:09.783 "data_size": 63488 00:13:09.783 }, 00:13:09.783 { 00:13:09.783 "name": "BaseBdev4", 00:13:09.783 "uuid": "e520354a-6e5c-5672-acf8-b3a00a326e6d", 00:13:09.783 "is_configured": true, 00:13:09.783 "data_offset": 2048, 00:13:09.783 "data_size": 63488 00:13:09.783 } 00:13:09.783 ] 00:13:09.783 }' 00:13:09.783 19:54:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:09.783 19:54:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:09.783 19:54:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:09.783 19:54:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:09.783 19:54:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:11.158 19:54:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:11.158 19:54:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:11.158 19:54:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:11.158 19:54:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:11.158 19:54:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local 
target=spare 00:13:11.158 19:54:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:11.158 19:54:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:11.158 19:54:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:11.158 19:54:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.158 19:54:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:11.158 19:54:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.158 19:54:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:11.158 "name": "raid_bdev1", 00:13:11.158 "uuid": "44661987-bb41-43d0-92d2-ac8601dbf76a", 00:13:11.158 "strip_size_kb": 64, 00:13:11.158 "state": "online", 00:13:11.158 "raid_level": "raid5f", 00:13:11.158 "superblock": true, 00:13:11.158 "num_base_bdevs": 4, 00:13:11.158 "num_base_bdevs_discovered": 4, 00:13:11.158 "num_base_bdevs_operational": 4, 00:13:11.158 "process": { 00:13:11.158 "type": "rebuild", 00:13:11.158 "target": "spare", 00:13:11.158 "progress": { 00:13:11.158 "blocks": 103680, 00:13:11.158 "percent": 54 00:13:11.158 } 00:13:11.158 }, 00:13:11.158 "base_bdevs_list": [ 00:13:11.158 { 00:13:11.158 "name": "spare", 00:13:11.158 "uuid": "a29002bb-b8c5-58cb-a926-fbfe05167775", 00:13:11.158 "is_configured": true, 00:13:11.158 "data_offset": 2048, 00:13:11.158 "data_size": 63488 00:13:11.158 }, 00:13:11.158 { 00:13:11.158 "name": "BaseBdev2", 00:13:11.158 "uuid": "f8c76858-a88b-5578-aba7-b19ebe977e3c", 00:13:11.158 "is_configured": true, 00:13:11.158 "data_offset": 2048, 00:13:11.158 "data_size": 63488 00:13:11.158 }, 00:13:11.158 { 00:13:11.158 "name": "BaseBdev3", 00:13:11.158 "uuid": "ac5b4600-fab8-51e5-bc8a-7ded034d7785", 00:13:11.158 "is_configured": true, 00:13:11.158 
"data_offset": 2048, 00:13:11.158 "data_size": 63488 00:13:11.158 }, 00:13:11.158 { 00:13:11.158 "name": "BaseBdev4", 00:13:11.158 "uuid": "e520354a-6e5c-5672-acf8-b3a00a326e6d", 00:13:11.158 "is_configured": true, 00:13:11.158 "data_offset": 2048, 00:13:11.158 "data_size": 63488 00:13:11.159 } 00:13:11.159 ] 00:13:11.159 }' 00:13:11.159 19:54:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:11.159 19:54:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:11.159 19:54:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:11.159 19:54:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:11.159 19:54:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:12.093 19:54:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:12.093 19:54:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:12.093 19:54:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:12.093 19:54:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:12.093 19:54:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:12.093 19:54:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:12.094 19:54:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:12.094 19:54:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:12.094 19:54:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.094 19:54:02 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:13:12.094 19:54:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.094 19:54:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:12.094 "name": "raid_bdev1", 00:13:12.094 "uuid": "44661987-bb41-43d0-92d2-ac8601dbf76a", 00:13:12.094 "strip_size_kb": 64, 00:13:12.094 "state": "online", 00:13:12.094 "raid_level": "raid5f", 00:13:12.094 "superblock": true, 00:13:12.094 "num_base_bdevs": 4, 00:13:12.094 "num_base_bdevs_discovered": 4, 00:13:12.094 "num_base_bdevs_operational": 4, 00:13:12.094 "process": { 00:13:12.094 "type": "rebuild", 00:13:12.094 "target": "spare", 00:13:12.094 "progress": { 00:13:12.094 "blocks": 124800, 00:13:12.094 "percent": 65 00:13:12.094 } 00:13:12.094 }, 00:13:12.094 "base_bdevs_list": [ 00:13:12.094 { 00:13:12.094 "name": "spare", 00:13:12.094 "uuid": "a29002bb-b8c5-58cb-a926-fbfe05167775", 00:13:12.094 "is_configured": true, 00:13:12.094 "data_offset": 2048, 00:13:12.094 "data_size": 63488 00:13:12.094 }, 00:13:12.094 { 00:13:12.094 "name": "BaseBdev2", 00:13:12.094 "uuid": "f8c76858-a88b-5578-aba7-b19ebe977e3c", 00:13:12.094 "is_configured": true, 00:13:12.094 "data_offset": 2048, 00:13:12.094 "data_size": 63488 00:13:12.094 }, 00:13:12.094 { 00:13:12.094 "name": "BaseBdev3", 00:13:12.094 "uuid": "ac5b4600-fab8-51e5-bc8a-7ded034d7785", 00:13:12.094 "is_configured": true, 00:13:12.094 "data_offset": 2048, 00:13:12.094 "data_size": 63488 00:13:12.094 }, 00:13:12.094 { 00:13:12.094 "name": "BaseBdev4", 00:13:12.094 "uuid": "e520354a-6e5c-5672-acf8-b3a00a326e6d", 00:13:12.094 "is_configured": true, 00:13:12.094 "data_offset": 2048, 00:13:12.094 "data_size": 63488 00:13:12.094 } 00:13:12.094 ] 00:13:12.094 }' 00:13:12.094 19:54:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:12.094 19:54:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild 
== \r\e\b\u\i\l\d ]] 00:13:12.094 19:54:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:12.094 19:54:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:12.094 19:54:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:13.029 19:54:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:13.029 19:54:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:13.029 19:54:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:13.029 19:54:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:13.029 19:54:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:13.029 19:54:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:13.029 19:54:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:13.029 19:54:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:13.029 19:54:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.029 19:54:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:13.029 19:54:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.029 19:54:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:13.029 "name": "raid_bdev1", 00:13:13.029 "uuid": "44661987-bb41-43d0-92d2-ac8601dbf76a", 00:13:13.029 "strip_size_kb": 64, 00:13:13.029 "state": "online", 00:13:13.029 "raid_level": "raid5f", 00:13:13.029 "superblock": true, 00:13:13.029 "num_base_bdevs": 4, 00:13:13.029 "num_base_bdevs_discovered": 4, 
00:13:13.029 "num_base_bdevs_operational": 4, 00:13:13.029 "process": { 00:13:13.029 "type": "rebuild", 00:13:13.029 "target": "spare", 00:13:13.029 "progress": { 00:13:13.029 "blocks": 145920, 00:13:13.029 "percent": 76 00:13:13.029 } 00:13:13.029 }, 00:13:13.029 "base_bdevs_list": [ 00:13:13.029 { 00:13:13.029 "name": "spare", 00:13:13.029 "uuid": "a29002bb-b8c5-58cb-a926-fbfe05167775", 00:13:13.029 "is_configured": true, 00:13:13.029 "data_offset": 2048, 00:13:13.029 "data_size": 63488 00:13:13.029 }, 00:13:13.029 { 00:13:13.029 "name": "BaseBdev2", 00:13:13.029 "uuid": "f8c76858-a88b-5578-aba7-b19ebe977e3c", 00:13:13.029 "is_configured": true, 00:13:13.029 "data_offset": 2048, 00:13:13.029 "data_size": 63488 00:13:13.029 }, 00:13:13.029 { 00:13:13.029 "name": "BaseBdev3", 00:13:13.029 "uuid": "ac5b4600-fab8-51e5-bc8a-7ded034d7785", 00:13:13.029 "is_configured": true, 00:13:13.029 "data_offset": 2048, 00:13:13.029 "data_size": 63488 00:13:13.029 }, 00:13:13.029 { 00:13:13.029 "name": "BaseBdev4", 00:13:13.029 "uuid": "e520354a-6e5c-5672-acf8-b3a00a326e6d", 00:13:13.029 "is_configured": true, 00:13:13.029 "data_offset": 2048, 00:13:13.029 "data_size": 63488 00:13:13.029 } 00:13:13.029 ] 00:13:13.029 }' 00:13:13.029 19:54:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:13.288 19:54:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:13.288 19:54:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:13.288 19:54:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:13.288 19:54:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:14.223 19:54:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:14.223 19:54:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process 
raid_bdev1 rebuild spare 00:13:14.223 19:54:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:14.223 19:54:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:14.223 19:54:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:14.223 19:54:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:14.223 19:54:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:14.223 19:54:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:14.223 19:54:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.223 19:54:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:14.223 19:54:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.223 19:54:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:14.223 "name": "raid_bdev1", 00:13:14.223 "uuid": "44661987-bb41-43d0-92d2-ac8601dbf76a", 00:13:14.223 "strip_size_kb": 64, 00:13:14.223 "state": "online", 00:13:14.223 "raid_level": "raid5f", 00:13:14.223 "superblock": true, 00:13:14.223 "num_base_bdevs": 4, 00:13:14.223 "num_base_bdevs_discovered": 4, 00:13:14.223 "num_base_bdevs_operational": 4, 00:13:14.223 "process": { 00:13:14.223 "type": "rebuild", 00:13:14.223 "target": "spare", 00:13:14.223 "progress": { 00:13:14.223 "blocks": 167040, 00:13:14.223 "percent": 87 00:13:14.223 } 00:13:14.223 }, 00:13:14.223 "base_bdevs_list": [ 00:13:14.223 { 00:13:14.223 "name": "spare", 00:13:14.223 "uuid": "a29002bb-b8c5-58cb-a926-fbfe05167775", 00:13:14.223 "is_configured": true, 00:13:14.223 "data_offset": 2048, 00:13:14.223 "data_size": 63488 00:13:14.223 }, 00:13:14.223 { 00:13:14.223 "name": "BaseBdev2", 
00:13:14.223 "uuid": "f8c76858-a88b-5578-aba7-b19ebe977e3c", 00:13:14.223 "is_configured": true, 00:13:14.223 "data_offset": 2048, 00:13:14.223 "data_size": 63488 00:13:14.223 }, 00:13:14.223 { 00:13:14.223 "name": "BaseBdev3", 00:13:14.223 "uuid": "ac5b4600-fab8-51e5-bc8a-7ded034d7785", 00:13:14.223 "is_configured": true, 00:13:14.223 "data_offset": 2048, 00:13:14.223 "data_size": 63488 00:13:14.223 }, 00:13:14.223 { 00:13:14.223 "name": "BaseBdev4", 00:13:14.223 "uuid": "e520354a-6e5c-5672-acf8-b3a00a326e6d", 00:13:14.223 "is_configured": true, 00:13:14.223 "data_offset": 2048, 00:13:14.223 "data_size": 63488 00:13:14.223 } 00:13:14.223 ] 00:13:14.223 }' 00:13:14.223 19:54:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:14.223 19:54:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:14.223 19:54:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:14.223 19:54:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:14.223 19:54:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:15.598 19:54:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:15.598 19:54:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:15.598 19:54:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:15.598 19:54:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:15.598 19:54:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:15.598 19:54:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:15.598 19:54:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:13:15.598 19:54:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:15.598 19:54:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.598 19:54:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:15.598 19:54:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.598 19:54:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:15.598 "name": "raid_bdev1", 00:13:15.598 "uuid": "44661987-bb41-43d0-92d2-ac8601dbf76a", 00:13:15.598 "strip_size_kb": 64, 00:13:15.598 "state": "online", 00:13:15.598 "raid_level": "raid5f", 00:13:15.598 "superblock": true, 00:13:15.598 "num_base_bdevs": 4, 00:13:15.598 "num_base_bdevs_discovered": 4, 00:13:15.598 "num_base_bdevs_operational": 4, 00:13:15.598 "process": { 00:13:15.598 "type": "rebuild", 00:13:15.598 "target": "spare", 00:13:15.598 "progress": { 00:13:15.598 "blocks": 188160, 00:13:15.598 "percent": 98 00:13:15.598 } 00:13:15.598 }, 00:13:15.598 "base_bdevs_list": [ 00:13:15.598 { 00:13:15.598 "name": "spare", 00:13:15.598 "uuid": "a29002bb-b8c5-58cb-a926-fbfe05167775", 00:13:15.598 "is_configured": true, 00:13:15.598 "data_offset": 2048, 00:13:15.598 "data_size": 63488 00:13:15.598 }, 00:13:15.598 { 00:13:15.598 "name": "BaseBdev2", 00:13:15.598 "uuid": "f8c76858-a88b-5578-aba7-b19ebe977e3c", 00:13:15.598 "is_configured": true, 00:13:15.598 "data_offset": 2048, 00:13:15.598 "data_size": 63488 00:13:15.598 }, 00:13:15.598 { 00:13:15.598 "name": "BaseBdev3", 00:13:15.598 "uuid": "ac5b4600-fab8-51e5-bc8a-7ded034d7785", 00:13:15.598 "is_configured": true, 00:13:15.598 "data_offset": 2048, 00:13:15.598 "data_size": 63488 00:13:15.598 }, 00:13:15.598 { 00:13:15.598 "name": "BaseBdev4", 00:13:15.598 "uuid": "e520354a-6e5c-5672-acf8-b3a00a326e6d", 00:13:15.598 "is_configured": true, 
00:13:15.598 "data_offset": 2048, 00:13:15.598 "data_size": 63488 00:13:15.598 } 00:13:15.598 ] 00:13:15.598 }' 00:13:15.598 19:54:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:15.598 19:54:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:15.598 19:54:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:15.598 19:54:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:15.598 19:54:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:15.598 [2024-11-26 19:54:06.260800] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:15.598 [2024-11-26 19:54:06.260862] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:15.598 [2024-11-26 19:54:06.260995] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:16.534 19:54:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:16.534 19:54:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:16.534 19:54:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:16.534 19:54:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:16.534 19:54:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:16.534 19:54:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:16.534 19:54:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:16.534 19:54:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:16.534 
19:54:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.534 19:54:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:16.534 19:54:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.534 19:54:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:16.534 "name": "raid_bdev1", 00:13:16.534 "uuid": "44661987-bb41-43d0-92d2-ac8601dbf76a", 00:13:16.534 "strip_size_kb": 64, 00:13:16.534 "state": "online", 00:13:16.534 "raid_level": "raid5f", 00:13:16.534 "superblock": true, 00:13:16.534 "num_base_bdevs": 4, 00:13:16.534 "num_base_bdevs_discovered": 4, 00:13:16.534 "num_base_bdevs_operational": 4, 00:13:16.534 "base_bdevs_list": [ 00:13:16.534 { 00:13:16.534 "name": "spare", 00:13:16.534 "uuid": "a29002bb-b8c5-58cb-a926-fbfe05167775", 00:13:16.534 "is_configured": true, 00:13:16.534 "data_offset": 2048, 00:13:16.534 "data_size": 63488 00:13:16.534 }, 00:13:16.534 { 00:13:16.534 "name": "BaseBdev2", 00:13:16.534 "uuid": "f8c76858-a88b-5578-aba7-b19ebe977e3c", 00:13:16.534 "is_configured": true, 00:13:16.534 "data_offset": 2048, 00:13:16.534 "data_size": 63488 00:13:16.534 }, 00:13:16.534 { 00:13:16.534 "name": "BaseBdev3", 00:13:16.534 "uuid": "ac5b4600-fab8-51e5-bc8a-7ded034d7785", 00:13:16.534 "is_configured": true, 00:13:16.534 "data_offset": 2048, 00:13:16.534 "data_size": 63488 00:13:16.534 }, 00:13:16.534 { 00:13:16.534 "name": "BaseBdev4", 00:13:16.534 "uuid": "e520354a-6e5c-5672-acf8-b3a00a326e6d", 00:13:16.534 "is_configured": true, 00:13:16.534 "data_offset": 2048, 00:13:16.534 "data_size": 63488 00:13:16.534 } 00:13:16.534 ] 00:13:16.534 }' 00:13:16.534 19:54:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:16.534 19:54:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:16.535 19:54:07 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:16.535 19:54:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:16.535 19:54:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:13:16.535 19:54:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:16.535 19:54:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:16.535 19:54:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:16.535 19:54:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:16.535 19:54:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:16.535 19:54:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:16.535 19:54:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:16.535 19:54:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.535 19:54:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:16.535 19:54:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.535 19:54:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:16.535 "name": "raid_bdev1", 00:13:16.535 "uuid": "44661987-bb41-43d0-92d2-ac8601dbf76a", 00:13:16.535 "strip_size_kb": 64, 00:13:16.535 "state": "online", 00:13:16.535 "raid_level": "raid5f", 00:13:16.535 "superblock": true, 00:13:16.535 "num_base_bdevs": 4, 00:13:16.535 "num_base_bdevs_discovered": 4, 00:13:16.535 "num_base_bdevs_operational": 4, 00:13:16.535 "base_bdevs_list": [ 00:13:16.535 { 00:13:16.535 "name": "spare", 00:13:16.535 "uuid": 
"a29002bb-b8c5-58cb-a926-fbfe05167775", 00:13:16.535 "is_configured": true, 00:13:16.535 "data_offset": 2048, 00:13:16.535 "data_size": 63488 00:13:16.535 }, 00:13:16.535 { 00:13:16.535 "name": "BaseBdev2", 00:13:16.535 "uuid": "f8c76858-a88b-5578-aba7-b19ebe977e3c", 00:13:16.535 "is_configured": true, 00:13:16.535 "data_offset": 2048, 00:13:16.535 "data_size": 63488 00:13:16.535 }, 00:13:16.535 { 00:13:16.535 "name": "BaseBdev3", 00:13:16.535 "uuid": "ac5b4600-fab8-51e5-bc8a-7ded034d7785", 00:13:16.535 "is_configured": true, 00:13:16.535 "data_offset": 2048, 00:13:16.535 "data_size": 63488 00:13:16.535 }, 00:13:16.535 { 00:13:16.535 "name": "BaseBdev4", 00:13:16.535 "uuid": "e520354a-6e5c-5672-acf8-b3a00a326e6d", 00:13:16.535 "is_configured": true, 00:13:16.535 "data_offset": 2048, 00:13:16.535 "data_size": 63488 00:13:16.535 } 00:13:16.535 ] 00:13:16.535 }' 00:13:16.535 19:54:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:16.535 19:54:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:16.535 19:54:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:16.535 19:54:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:16.535 19:54:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:13:16.535 19:54:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:16.535 19:54:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:16.535 19:54:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:16.535 19:54:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:16.535 19:54:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:13:16.535 19:54:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:16.535 19:54:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:16.535 19:54:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:16.535 19:54:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:16.535 19:54:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:16.535 19:54:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:16.535 19:54:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.535 19:54:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:16.535 19:54:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.535 19:54:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:16.535 "name": "raid_bdev1", 00:13:16.535 "uuid": "44661987-bb41-43d0-92d2-ac8601dbf76a", 00:13:16.535 "strip_size_kb": 64, 00:13:16.535 "state": "online", 00:13:16.535 "raid_level": "raid5f", 00:13:16.535 "superblock": true, 00:13:16.535 "num_base_bdevs": 4, 00:13:16.535 "num_base_bdevs_discovered": 4, 00:13:16.535 "num_base_bdevs_operational": 4, 00:13:16.535 "base_bdevs_list": [ 00:13:16.535 { 00:13:16.535 "name": "spare", 00:13:16.535 "uuid": "a29002bb-b8c5-58cb-a926-fbfe05167775", 00:13:16.535 "is_configured": true, 00:13:16.535 "data_offset": 2048, 00:13:16.535 "data_size": 63488 00:13:16.535 }, 00:13:16.535 { 00:13:16.535 "name": "BaseBdev2", 00:13:16.535 "uuid": "f8c76858-a88b-5578-aba7-b19ebe977e3c", 00:13:16.535 "is_configured": true, 00:13:16.535 "data_offset": 2048, 00:13:16.535 "data_size": 63488 00:13:16.535 }, 00:13:16.535 { 00:13:16.535 "name": 
"BaseBdev3", 00:13:16.535 "uuid": "ac5b4600-fab8-51e5-bc8a-7ded034d7785", 00:13:16.535 "is_configured": true, 00:13:16.535 "data_offset": 2048, 00:13:16.535 "data_size": 63488 00:13:16.535 }, 00:13:16.535 { 00:13:16.535 "name": "BaseBdev4", 00:13:16.535 "uuid": "e520354a-6e5c-5672-acf8-b3a00a326e6d", 00:13:16.535 "is_configured": true, 00:13:16.535 "data_offset": 2048, 00:13:16.535 "data_size": 63488 00:13:16.535 } 00:13:16.535 ] 00:13:16.535 }' 00:13:16.535 19:54:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:16.535 19:54:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:17.102 19:54:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:17.102 19:54:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.102 19:54:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:17.102 [2024-11-26 19:54:07.754035] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:17.102 [2024-11-26 19:54:07.754073] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:17.102 [2024-11-26 19:54:07.754158] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:17.102 [2024-11-26 19:54:07.754250] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:17.102 [2024-11-26 19:54:07.754262] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:17.102 19:54:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.102 19:54:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:17.102 19:54:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:13:17.102 19:54:07 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.102 19:54:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:17.102 19:54:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.102 19:54:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:17.102 19:54:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:17.102 19:54:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:13:17.102 19:54:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:13:17.102 19:54:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:17.102 19:54:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:13:17.102 19:54:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:17.102 19:54:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:17.102 19:54:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:17.102 19:54:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:13:17.102 19:54:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:17.102 19:54:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:17.102 19:54:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:13:17.102 /dev/nbd0 00:13:17.102 19:54:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:17.102 19:54:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 
-- # waitfornbd nbd0 00:13:17.102 19:54:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:17.102 19:54:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:13:17.102 19:54:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:17.102 19:54:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:17.102 19:54:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:17.102 19:54:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:13:17.102 19:54:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:17.102 19:54:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:17.102 19:54:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:17.102 1+0 records in 00:13:17.102 1+0 records out 00:13:17.102 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000236409 s, 17.3 MB/s 00:13:17.102 19:54:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:17.102 19:54:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:13:17.102 19:54:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:17.102 19:54:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:17.102 19:54:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:13:17.102 19:54:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:17.102 19:54:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # 
(( i < 2 )) 00:13:17.102 19:54:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:13:17.361 /dev/nbd1 00:13:17.361 19:54:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:17.361 19:54:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:17.361 19:54:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:13:17.361 19:54:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:13:17.361 19:54:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:17.361 19:54:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:17.361 19:54:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:13:17.361 19:54:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:13:17.361 19:54:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:17.361 19:54:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:17.361 19:54:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:17.361 1+0 records in 00:13:17.361 1+0 records out 00:13:17.361 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000231025 s, 17.7 MB/s 00:13:17.361 19:54:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:17.361 19:54:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:13:17.361 19:54:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:17.361 19:54:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:17.361 19:54:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:13:17.361 19:54:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:17.361 19:54:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:17.361 19:54:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:13:17.620 19:54:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:13:17.620 19:54:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:17.620 19:54:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:17.620 19:54:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:17.620 19:54:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:13:17.620 19:54:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:17.620 19:54:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:17.879 19:54:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:17.879 19:54:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:17.879 19:54:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:17.879 19:54:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:17.879 19:54:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:17.879 19:54:08 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:17.879 19:54:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:13:17.879 19:54:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:13:17.879 19:54:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:17.879 19:54:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:17.879 19:54:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:17.879 19:54:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:17.879 19:54:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:17.879 19:54:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:17.879 19:54:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:17.879 19:54:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:17.879 19:54:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:13:17.879 19:54:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:13:17.879 19:54:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:13:17.879 19:54:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:13:17.879 19:54:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.879 19:54:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:17.879 19:54:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.879 19:54:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 
-- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:17.879 19:54:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.879 19:54:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:18.138 [2024-11-26 19:54:08.814397] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:18.138 [2024-11-26 19:54:08.814447] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:18.138 [2024-11-26 19:54:08.814469] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:13:18.138 [2024-11-26 19:54:08.814477] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:18.138 [2024-11-26 19:54:08.816494] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:18.138 [2024-11-26 19:54:08.816528] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:18.138 [2024-11-26 19:54:08.816612] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:18.138 [2024-11-26 19:54:08.816656] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:18.138 [2024-11-26 19:54:08.816771] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:18.138 [2024-11-26 19:54:08.816844] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:18.138 [2024-11-26 19:54:08.816906] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:18.138 spare 00:13:18.138 19:54:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.138 19:54:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:13:18.138 19:54:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.138 19:54:08 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:18.138 [2024-11-26 19:54:08.916982] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:13:18.138 [2024-11-26 19:54:08.917007] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:13:18.138 [2024-11-26 19:54:08.917235] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000491d0 00:13:18.138 [2024-11-26 19:54:08.920835] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:13:18.138 [2024-11-26 19:54:08.920856] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:13:18.138 [2024-11-26 19:54:08.921006] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:18.138 19:54:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.138 19:54:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:13:18.138 19:54:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:18.138 19:54:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:18.138 19:54:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:18.138 19:54:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:18.138 19:54:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:18.138 19:54:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:18.138 19:54:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:18.138 19:54:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:13:18.138 19:54:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:18.138 19:54:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:18.138 19:54:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.138 19:54:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:18.138 19:54:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:18.138 19:54:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.138 19:54:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:18.138 "name": "raid_bdev1", 00:13:18.138 "uuid": "44661987-bb41-43d0-92d2-ac8601dbf76a", 00:13:18.138 "strip_size_kb": 64, 00:13:18.138 "state": "online", 00:13:18.138 "raid_level": "raid5f", 00:13:18.138 "superblock": true, 00:13:18.138 "num_base_bdevs": 4, 00:13:18.138 "num_base_bdevs_discovered": 4, 00:13:18.138 "num_base_bdevs_operational": 4, 00:13:18.138 "base_bdevs_list": [ 00:13:18.138 { 00:13:18.138 "name": "spare", 00:13:18.138 "uuid": "a29002bb-b8c5-58cb-a926-fbfe05167775", 00:13:18.138 "is_configured": true, 00:13:18.138 "data_offset": 2048, 00:13:18.138 "data_size": 63488 00:13:18.138 }, 00:13:18.138 { 00:13:18.138 "name": "BaseBdev2", 00:13:18.138 "uuid": "f8c76858-a88b-5578-aba7-b19ebe977e3c", 00:13:18.138 "is_configured": true, 00:13:18.138 "data_offset": 2048, 00:13:18.138 "data_size": 63488 00:13:18.138 }, 00:13:18.138 { 00:13:18.138 "name": "BaseBdev3", 00:13:18.138 "uuid": "ac5b4600-fab8-51e5-bc8a-7ded034d7785", 00:13:18.138 "is_configured": true, 00:13:18.138 "data_offset": 2048, 00:13:18.138 "data_size": 63488 00:13:18.138 }, 00:13:18.138 { 00:13:18.138 "name": "BaseBdev4", 00:13:18.138 "uuid": "e520354a-6e5c-5672-acf8-b3a00a326e6d", 00:13:18.138 "is_configured": true, 00:13:18.138 
"data_offset": 2048, 00:13:18.138 "data_size": 63488 00:13:18.138 } 00:13:18.138 ] 00:13:18.138 }' 00:13:18.138 19:54:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:18.138 19:54:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:18.396 19:54:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:18.396 19:54:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:18.397 19:54:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:18.397 19:54:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:18.397 19:54:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:18.397 19:54:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:18.397 19:54:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:18.397 19:54:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.397 19:54:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:18.397 19:54:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.397 19:54:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:18.397 "name": "raid_bdev1", 00:13:18.397 "uuid": "44661987-bb41-43d0-92d2-ac8601dbf76a", 00:13:18.397 "strip_size_kb": 64, 00:13:18.397 "state": "online", 00:13:18.397 "raid_level": "raid5f", 00:13:18.397 "superblock": true, 00:13:18.397 "num_base_bdevs": 4, 00:13:18.397 "num_base_bdevs_discovered": 4, 00:13:18.397 "num_base_bdevs_operational": 4, 00:13:18.397 "base_bdevs_list": [ 00:13:18.397 { 00:13:18.397 "name": "spare", 00:13:18.397 "uuid": 
"a29002bb-b8c5-58cb-a926-fbfe05167775", 00:13:18.397 "is_configured": true, 00:13:18.397 "data_offset": 2048, 00:13:18.397 "data_size": 63488 00:13:18.397 }, 00:13:18.397 { 00:13:18.397 "name": "BaseBdev2", 00:13:18.397 "uuid": "f8c76858-a88b-5578-aba7-b19ebe977e3c", 00:13:18.397 "is_configured": true, 00:13:18.397 "data_offset": 2048, 00:13:18.397 "data_size": 63488 00:13:18.397 }, 00:13:18.397 { 00:13:18.397 "name": "BaseBdev3", 00:13:18.397 "uuid": "ac5b4600-fab8-51e5-bc8a-7ded034d7785", 00:13:18.397 "is_configured": true, 00:13:18.397 "data_offset": 2048, 00:13:18.397 "data_size": 63488 00:13:18.397 }, 00:13:18.397 { 00:13:18.397 "name": "BaseBdev4", 00:13:18.397 "uuid": "e520354a-6e5c-5672-acf8-b3a00a326e6d", 00:13:18.397 "is_configured": true, 00:13:18.397 "data_offset": 2048, 00:13:18.397 "data_size": 63488 00:13:18.397 } 00:13:18.397 ] 00:13:18.397 }' 00:13:18.397 19:54:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:18.397 19:54:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:18.397 19:54:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:18.654 19:54:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:18.654 19:54:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:13:18.654 19:54:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:18.654 19:54:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.654 19:54:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:18.654 19:54:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.654 19:54:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:13:18.654 
19:54:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:18.654 19:54:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.654 19:54:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:18.654 [2024-11-26 19:54:09.389668] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:18.654 19:54:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.654 19:54:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:13:18.654 19:54:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:18.654 19:54:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:18.654 19:54:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:18.654 19:54:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:18.654 19:54:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:18.654 19:54:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:18.654 19:54:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:18.654 19:54:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:18.654 19:54:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:18.654 19:54:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:18.654 19:54:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.654 19:54:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:18.654 
19:54:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:18.654 19:54:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.654 19:54:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:18.654 "name": "raid_bdev1", 00:13:18.654 "uuid": "44661987-bb41-43d0-92d2-ac8601dbf76a", 00:13:18.654 "strip_size_kb": 64, 00:13:18.655 "state": "online", 00:13:18.655 "raid_level": "raid5f", 00:13:18.655 "superblock": true, 00:13:18.655 "num_base_bdevs": 4, 00:13:18.655 "num_base_bdevs_discovered": 3, 00:13:18.655 "num_base_bdevs_operational": 3, 00:13:18.655 "base_bdevs_list": [ 00:13:18.655 { 00:13:18.655 "name": null, 00:13:18.655 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:18.655 "is_configured": false, 00:13:18.655 "data_offset": 0, 00:13:18.655 "data_size": 63488 00:13:18.655 }, 00:13:18.655 { 00:13:18.655 "name": "BaseBdev2", 00:13:18.655 "uuid": "f8c76858-a88b-5578-aba7-b19ebe977e3c", 00:13:18.655 "is_configured": true, 00:13:18.655 "data_offset": 2048, 00:13:18.655 "data_size": 63488 00:13:18.655 }, 00:13:18.655 { 00:13:18.655 "name": "BaseBdev3", 00:13:18.655 "uuid": "ac5b4600-fab8-51e5-bc8a-7ded034d7785", 00:13:18.655 "is_configured": true, 00:13:18.655 "data_offset": 2048, 00:13:18.655 "data_size": 63488 00:13:18.655 }, 00:13:18.655 { 00:13:18.655 "name": "BaseBdev4", 00:13:18.655 "uuid": "e520354a-6e5c-5672-acf8-b3a00a326e6d", 00:13:18.655 "is_configured": true, 00:13:18.655 "data_offset": 2048, 00:13:18.655 "data_size": 63488 00:13:18.655 } 00:13:18.655 ] 00:13:18.655 }' 00:13:18.655 19:54:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:18.655 19:54:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:18.913 19:54:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:18.913 
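After `bdev_raid_remove_base_bdev spare`, the trace runs `verify_raid_bdev_state raid_bdev1 online raid5f 64 3`: it selects the raid bdev from `bdev_raid_get_bdevs all` with jq and compares state, level, strip size, and the operational base-bdev count. A sketch of the same assertions in Python (JSON abridged from the dump above; `base_bdevs_list` trimmed to the name/configured fields the check inspects):

```python
import json

# bdev_raid_get_bdevs output as dumped in the log after the spare base
# bdev is removed; the vacated slot stays in the list, unconfigured.
bdevs = json.loads("""
[{
  "name": "raid_bdev1",
  "state": "online",
  "raid_level": "raid5f",
  "strip_size_kb": 64,
  "num_base_bdevs": 4,
  "num_base_bdevs_discovered": 3,
  "num_base_bdevs_operational": 3,
  "base_bdevs_list": [
    {"name": null, "is_configured": false},
    {"name": "BaseBdev2", "is_configured": true},
    {"name": "BaseBdev3", "is_configured": true},
    {"name": "BaseBdev4", "is_configured": true}
  ]
}]
""")

# Mirror of jq '.[] | select(.name == "raid_bdev1")'
info = next(b for b in bdevs if b["name"] == "raid_bdev1")

# verify_raid_bdev_state raid_bdev1 online raid5f 64 3, re-done in Python.
assert info["state"] == "online"
assert info["raid_level"] == "raid5f"
assert info["strip_size_kb"] == 64
assert info["num_base_bdevs_operational"] == 3

# The removed base bdev leaves a null, unconfigured placeholder entry,
# so the configured count matches num_base_bdevs_discovered, not num_base_bdevs.
configured = [b for b in info["base_bdevs_list"] if b["is_configured"]]
assert len(configured) == info["num_base_bdevs_discovered"] == 3
```

This is why the array remains degraded but online: `num_base_bdevs` stays at 4 while only 3 slots are configured, which for raid5f is still enough to serve I/O.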
19:54:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.913 19:54:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:18.913 [2024-11-26 19:54:09.721750] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:18.913 [2024-11-26 19:54:09.721922] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:13:18.913 [2024-11-26 19:54:09.721938] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:13:18.913 [2024-11-26 19:54:09.721968] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:18.913 [2024-11-26 19:54:09.729456] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000492a0 00:13:18.913 19:54:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.913 19:54:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:13:18.913 [2024-11-26 19:54:09.734724] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:19.848 19:54:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:19.849 19:54:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:19.849 19:54:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:19.849 19:54:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:19.849 19:54:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:19.849 19:54:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:19.849 19:54:10 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:19.849 19:54:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.849 19:54:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:19.849 19:54:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.849 19:54:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:19.849 "name": "raid_bdev1", 00:13:19.849 "uuid": "44661987-bb41-43d0-92d2-ac8601dbf76a", 00:13:19.849 "strip_size_kb": 64, 00:13:19.849 "state": "online", 00:13:19.849 "raid_level": "raid5f", 00:13:19.849 "superblock": true, 00:13:19.849 "num_base_bdevs": 4, 00:13:19.849 "num_base_bdevs_discovered": 4, 00:13:19.849 "num_base_bdevs_operational": 4, 00:13:19.849 "process": { 00:13:19.849 "type": "rebuild", 00:13:19.849 "target": "spare", 00:13:19.849 "progress": { 00:13:19.849 "blocks": 19200, 00:13:19.849 "percent": 10 00:13:19.849 } 00:13:19.849 }, 00:13:19.849 "base_bdevs_list": [ 00:13:19.849 { 00:13:19.849 "name": "spare", 00:13:19.849 "uuid": "a29002bb-b8c5-58cb-a926-fbfe05167775", 00:13:19.849 "is_configured": true, 00:13:19.849 "data_offset": 2048, 00:13:19.849 "data_size": 63488 00:13:19.849 }, 00:13:19.849 { 00:13:19.849 "name": "BaseBdev2", 00:13:19.849 "uuid": "f8c76858-a88b-5578-aba7-b19ebe977e3c", 00:13:19.849 "is_configured": true, 00:13:19.849 "data_offset": 2048, 00:13:19.849 "data_size": 63488 00:13:19.849 }, 00:13:19.849 { 00:13:19.849 "name": "BaseBdev3", 00:13:19.849 "uuid": "ac5b4600-fab8-51e5-bc8a-7ded034d7785", 00:13:19.849 "is_configured": true, 00:13:19.849 "data_offset": 2048, 00:13:19.849 "data_size": 63488 00:13:19.849 }, 00:13:19.849 { 00:13:19.849 "name": "BaseBdev4", 00:13:19.849 "uuid": "e520354a-6e5c-5672-acf8-b3a00a326e6d", 00:13:19.849 "is_configured": true, 00:13:19.849 "data_offset": 2048, 00:13:19.849 "data_size": 63488 00:13:19.849 } 00:13:19.849 ] 
00:13:19.849 }' 00:13:19.849 19:54:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:20.107 19:54:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:20.107 19:54:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:20.107 19:54:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:20.107 19:54:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:13:20.107 19:54:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.107 19:54:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:20.107 [2024-11-26 19:54:10.839570] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:20.107 [2024-11-26 19:54:10.841902] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:20.107 [2024-11-26 19:54:10.841958] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:20.107 [2024-11-26 19:54:10.841973] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:20.107 [2024-11-26 19:54:10.841981] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:20.107 19:54:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.107 19:54:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:13:20.107 19:54:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:20.107 19:54:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:20.107 19:54:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # 
local raid_level=raid5f 00:13:20.107 19:54:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:20.107 19:54:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:20.107 19:54:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:20.107 19:54:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:20.108 19:54:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:20.108 19:54:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:20.108 19:54:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:20.108 19:54:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:20.108 19:54:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.108 19:54:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:20.108 19:54:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.108 19:54:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:20.108 "name": "raid_bdev1", 00:13:20.108 "uuid": "44661987-bb41-43d0-92d2-ac8601dbf76a", 00:13:20.108 "strip_size_kb": 64, 00:13:20.108 "state": "online", 00:13:20.108 "raid_level": "raid5f", 00:13:20.108 "superblock": true, 00:13:20.108 "num_base_bdevs": 4, 00:13:20.108 "num_base_bdevs_discovered": 3, 00:13:20.108 "num_base_bdevs_operational": 3, 00:13:20.108 "base_bdevs_list": [ 00:13:20.108 { 00:13:20.108 "name": null, 00:13:20.108 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:20.108 "is_configured": false, 00:13:20.108 "data_offset": 0, 00:13:20.108 "data_size": 63488 00:13:20.108 }, 00:13:20.108 { 00:13:20.108 "name": "BaseBdev2", 
00:13:20.108 "uuid": "f8c76858-a88b-5578-aba7-b19ebe977e3c", 00:13:20.108 "is_configured": true, 00:13:20.108 "data_offset": 2048, 00:13:20.108 "data_size": 63488 00:13:20.108 }, 00:13:20.108 { 00:13:20.108 "name": "BaseBdev3", 00:13:20.108 "uuid": "ac5b4600-fab8-51e5-bc8a-7ded034d7785", 00:13:20.108 "is_configured": true, 00:13:20.108 "data_offset": 2048, 00:13:20.108 "data_size": 63488 00:13:20.108 }, 00:13:20.108 { 00:13:20.108 "name": "BaseBdev4", 00:13:20.108 "uuid": "e520354a-6e5c-5672-acf8-b3a00a326e6d", 00:13:20.108 "is_configured": true, 00:13:20.108 "data_offset": 2048, 00:13:20.108 "data_size": 63488 00:13:20.108 } 00:13:20.108 ] 00:13:20.108 }' 00:13:20.108 19:54:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:20.108 19:54:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:20.366 19:54:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:20.366 19:54:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.366 19:54:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:20.366 [2024-11-26 19:54:11.219031] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:20.366 [2024-11-26 19:54:11.219090] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:20.366 [2024-11-26 19:54:11.219113] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:13:20.366 [2024-11-26 19:54:11.219124] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:20.366 [2024-11-26 19:54:11.219553] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:20.366 [2024-11-26 19:54:11.219568] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:20.366 [2024-11-26 19:54:11.219642] 
bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:20.366 [2024-11-26 19:54:11.219655] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:13:20.366 [2024-11-26 19:54:11.219664] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:13:20.366 [2024-11-26 19:54:11.219684] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:20.366 [2024-11-26 19:54:11.227413] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000049370 00:13:20.366 spare 00:13:20.366 19:54:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.366 19:54:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:13:20.366 [2024-11-26 19:54:11.232536] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:21.301 19:54:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:21.301 19:54:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:21.301 19:54:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:21.301 19:54:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:21.301 19:54:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:21.560 19:54:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:21.560 19:54:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.560 19:54:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:21.560 19:54:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r 
'.[] | select(.name == "raid_bdev1")' 00:13:21.560 19:54:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.560 19:54:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:21.560 "name": "raid_bdev1", 00:13:21.560 "uuid": "44661987-bb41-43d0-92d2-ac8601dbf76a", 00:13:21.560 "strip_size_kb": 64, 00:13:21.560 "state": "online", 00:13:21.560 "raid_level": "raid5f", 00:13:21.560 "superblock": true, 00:13:21.561 "num_base_bdevs": 4, 00:13:21.561 "num_base_bdevs_discovered": 4, 00:13:21.561 "num_base_bdevs_operational": 4, 00:13:21.561 "process": { 00:13:21.561 "type": "rebuild", 00:13:21.561 "target": "spare", 00:13:21.561 "progress": { 00:13:21.561 "blocks": 19200, 00:13:21.561 "percent": 10 00:13:21.561 } 00:13:21.561 }, 00:13:21.561 "base_bdevs_list": [ 00:13:21.561 { 00:13:21.561 "name": "spare", 00:13:21.561 "uuid": "a29002bb-b8c5-58cb-a926-fbfe05167775", 00:13:21.561 "is_configured": true, 00:13:21.561 "data_offset": 2048, 00:13:21.561 "data_size": 63488 00:13:21.561 }, 00:13:21.561 { 00:13:21.561 "name": "BaseBdev2", 00:13:21.561 "uuid": "f8c76858-a88b-5578-aba7-b19ebe977e3c", 00:13:21.561 "is_configured": true, 00:13:21.561 "data_offset": 2048, 00:13:21.561 "data_size": 63488 00:13:21.561 }, 00:13:21.561 { 00:13:21.561 "name": "BaseBdev3", 00:13:21.561 "uuid": "ac5b4600-fab8-51e5-bc8a-7ded034d7785", 00:13:21.561 "is_configured": true, 00:13:21.561 "data_offset": 2048, 00:13:21.561 "data_size": 63488 00:13:21.561 }, 00:13:21.561 { 00:13:21.561 "name": "BaseBdev4", 00:13:21.561 "uuid": "e520354a-6e5c-5672-acf8-b3a00a326e6d", 00:13:21.561 "is_configured": true, 00:13:21.561 "data_offset": 2048, 00:13:21.561 "data_size": 63488 00:13:21.561 } 00:13:21.561 ] 00:13:21.561 }' 00:13:21.561 19:54:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:21.561 19:54:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == 
\r\e\b\u\i\l\d ]] 00:13:21.561 19:54:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:21.561 19:54:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:21.561 19:54:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:13:21.561 19:54:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.561 19:54:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:21.561 [2024-11-26 19:54:12.337512] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:21.561 [2024-11-26 19:54:12.340197] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:21.561 [2024-11-26 19:54:12.340247] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:21.561 [2024-11-26 19:54:12.340264] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:21.561 [2024-11-26 19:54:12.340270] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:21.561 19:54:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.561 19:54:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:13:21.561 19:54:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:21.561 19:54:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:21.561 19:54:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:21.561 19:54:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:21.561 19:54:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:13:21.561 19:54:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:21.561 19:54:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:21.561 19:54:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:21.561 19:54:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:21.561 19:54:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:21.561 19:54:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:21.561 19:54:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.561 19:54:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:21.561 19:54:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.561 19:54:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:21.561 "name": "raid_bdev1", 00:13:21.561 "uuid": "44661987-bb41-43d0-92d2-ac8601dbf76a", 00:13:21.561 "strip_size_kb": 64, 00:13:21.561 "state": "online", 00:13:21.561 "raid_level": "raid5f", 00:13:21.561 "superblock": true, 00:13:21.561 "num_base_bdevs": 4, 00:13:21.561 "num_base_bdevs_discovered": 3, 00:13:21.561 "num_base_bdevs_operational": 3, 00:13:21.561 "base_bdevs_list": [ 00:13:21.561 { 00:13:21.561 "name": null, 00:13:21.561 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:21.561 "is_configured": false, 00:13:21.561 "data_offset": 0, 00:13:21.561 "data_size": 63488 00:13:21.561 }, 00:13:21.561 { 00:13:21.561 "name": "BaseBdev2", 00:13:21.561 "uuid": "f8c76858-a88b-5578-aba7-b19ebe977e3c", 00:13:21.561 "is_configured": true, 00:13:21.561 "data_offset": 2048, 00:13:21.561 "data_size": 63488 00:13:21.561 }, 00:13:21.561 { 00:13:21.561 "name": 
"BaseBdev3", 00:13:21.561 "uuid": "ac5b4600-fab8-51e5-bc8a-7ded034d7785", 00:13:21.561 "is_configured": true, 00:13:21.561 "data_offset": 2048, 00:13:21.561 "data_size": 63488 00:13:21.561 }, 00:13:21.561 { 00:13:21.561 "name": "BaseBdev4", 00:13:21.561 "uuid": "e520354a-6e5c-5672-acf8-b3a00a326e6d", 00:13:21.561 "is_configured": true, 00:13:21.561 "data_offset": 2048, 00:13:21.561 "data_size": 63488 00:13:21.561 } 00:13:21.561 ] 00:13:21.561 }' 00:13:21.561 19:54:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:21.561 19:54:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:21.821 19:54:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:21.821 19:54:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:21.821 19:54:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:21.821 19:54:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:21.821 19:54:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:21.821 19:54:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:21.821 19:54:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.821 19:54:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:21.821 19:54:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:21.821 19:54:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.821 19:54:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:21.821 "name": "raid_bdev1", 00:13:21.821 "uuid": "44661987-bb41-43d0-92d2-ac8601dbf76a", 00:13:21.821 
"strip_size_kb": 64, 00:13:21.821 "state": "online", 00:13:21.821 "raid_level": "raid5f", 00:13:21.821 "superblock": true, 00:13:21.821 "num_base_bdevs": 4, 00:13:21.821 "num_base_bdevs_discovered": 3, 00:13:21.821 "num_base_bdevs_operational": 3, 00:13:21.821 "base_bdevs_list": [ 00:13:21.821 { 00:13:21.821 "name": null, 00:13:21.821 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:21.821 "is_configured": false, 00:13:21.821 "data_offset": 0, 00:13:21.821 "data_size": 63488 00:13:21.821 }, 00:13:21.821 { 00:13:21.821 "name": "BaseBdev2", 00:13:21.821 "uuid": "f8c76858-a88b-5578-aba7-b19ebe977e3c", 00:13:21.821 "is_configured": true, 00:13:21.821 "data_offset": 2048, 00:13:21.821 "data_size": 63488 00:13:21.821 }, 00:13:21.821 { 00:13:21.821 "name": "BaseBdev3", 00:13:21.821 "uuid": "ac5b4600-fab8-51e5-bc8a-7ded034d7785", 00:13:21.821 "is_configured": true, 00:13:21.821 "data_offset": 2048, 00:13:21.821 "data_size": 63488 00:13:21.821 }, 00:13:21.821 { 00:13:21.821 "name": "BaseBdev4", 00:13:21.821 "uuid": "e520354a-6e5c-5672-acf8-b3a00a326e6d", 00:13:21.821 "is_configured": true, 00:13:21.821 "data_offset": 2048, 00:13:21.821 "data_size": 63488 00:13:21.821 } 00:13:21.821 ] 00:13:21.821 }' 00:13:21.821 19:54:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:21.821 19:54:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:21.821 19:54:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:22.091 19:54:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:22.091 19:54:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:13:22.091 19:54:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.091 19:54:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:13:22.091 19:54:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.091 19:54:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:22.091 19:54:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.091 19:54:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:22.091 [2024-11-26 19:54:12.781415] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:22.091 [2024-11-26 19:54:12.781482] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:22.091 [2024-11-26 19:54:12.781510] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:13:22.091 [2024-11-26 19:54:12.781519] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:22.091 [2024-11-26 19:54:12.782058] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:22.091 [2024-11-26 19:54:12.782092] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:22.091 [2024-11-26 19:54:12.782187] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:13:22.091 [2024-11-26 19:54:12.782205] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:13:22.091 [2024-11-26 19:54:12.782215] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:22.091 [2024-11-26 19:54:12.782225] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:13:22.091 BaseBdev1 00:13:22.091 19:54:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.091 19:54:12 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@775 -- # sleep 1 00:13:23.025 19:54:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:13:23.025 19:54:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:23.025 19:54:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:23.025 19:54:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:23.025 19:54:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:23.026 19:54:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:23.026 19:54:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:23.026 19:54:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:23.026 19:54:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:23.026 19:54:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:23.026 19:54:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:23.026 19:54:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.026 19:54:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:23.026 19:54:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:23.026 19:54:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.026 19:54:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:23.026 "name": "raid_bdev1", 00:13:23.026 "uuid": "44661987-bb41-43d0-92d2-ac8601dbf76a", 00:13:23.026 "strip_size_kb": 64, 00:13:23.026 "state": "online", 00:13:23.026 
"raid_level": "raid5f", 00:13:23.026 "superblock": true, 00:13:23.026 "num_base_bdevs": 4, 00:13:23.026 "num_base_bdevs_discovered": 3, 00:13:23.026 "num_base_bdevs_operational": 3, 00:13:23.026 "base_bdevs_list": [ 00:13:23.026 { 00:13:23.026 "name": null, 00:13:23.026 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:23.026 "is_configured": false, 00:13:23.026 "data_offset": 0, 00:13:23.026 "data_size": 63488 00:13:23.026 }, 00:13:23.026 { 00:13:23.026 "name": "BaseBdev2", 00:13:23.026 "uuid": "f8c76858-a88b-5578-aba7-b19ebe977e3c", 00:13:23.026 "is_configured": true, 00:13:23.026 "data_offset": 2048, 00:13:23.026 "data_size": 63488 00:13:23.026 }, 00:13:23.026 { 00:13:23.026 "name": "BaseBdev3", 00:13:23.026 "uuid": "ac5b4600-fab8-51e5-bc8a-7ded034d7785", 00:13:23.026 "is_configured": true, 00:13:23.026 "data_offset": 2048, 00:13:23.026 "data_size": 63488 00:13:23.026 }, 00:13:23.026 { 00:13:23.026 "name": "BaseBdev4", 00:13:23.026 "uuid": "e520354a-6e5c-5672-acf8-b3a00a326e6d", 00:13:23.026 "is_configured": true, 00:13:23.026 "data_offset": 2048, 00:13:23.026 "data_size": 63488 00:13:23.026 } 00:13:23.026 ] 00:13:23.026 }' 00:13:23.026 19:54:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:23.026 19:54:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:23.284 19:54:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:23.284 19:54:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:23.284 19:54:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:23.284 19:54:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:23.284 19:54:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:23.284 19:54:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:13:23.284 19:54:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:23.284 19:54:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.284 19:54:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:23.284 19:54:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.284 19:54:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:23.284 "name": "raid_bdev1", 00:13:23.284 "uuid": "44661987-bb41-43d0-92d2-ac8601dbf76a", 00:13:23.284 "strip_size_kb": 64, 00:13:23.284 "state": "online", 00:13:23.284 "raid_level": "raid5f", 00:13:23.284 "superblock": true, 00:13:23.284 "num_base_bdevs": 4, 00:13:23.284 "num_base_bdevs_discovered": 3, 00:13:23.284 "num_base_bdevs_operational": 3, 00:13:23.284 "base_bdevs_list": [ 00:13:23.284 { 00:13:23.284 "name": null, 00:13:23.284 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:23.284 "is_configured": false, 00:13:23.284 "data_offset": 0, 00:13:23.284 "data_size": 63488 00:13:23.284 }, 00:13:23.284 { 00:13:23.284 "name": "BaseBdev2", 00:13:23.284 "uuid": "f8c76858-a88b-5578-aba7-b19ebe977e3c", 00:13:23.284 "is_configured": true, 00:13:23.284 "data_offset": 2048, 00:13:23.284 "data_size": 63488 00:13:23.284 }, 00:13:23.284 { 00:13:23.284 "name": "BaseBdev3", 00:13:23.284 "uuid": "ac5b4600-fab8-51e5-bc8a-7ded034d7785", 00:13:23.284 "is_configured": true, 00:13:23.284 "data_offset": 2048, 00:13:23.284 "data_size": 63488 00:13:23.284 }, 00:13:23.284 { 00:13:23.284 "name": "BaseBdev4", 00:13:23.284 "uuid": "e520354a-6e5c-5672-acf8-b3a00a326e6d", 00:13:23.284 "is_configured": true, 00:13:23.284 "data_offset": 2048, 00:13:23.284 "data_size": 63488 00:13:23.284 } 00:13:23.284 ] 00:13:23.284 }' 00:13:23.284 19:54:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r 
'.process.type // "none"' 00:13:23.284 19:54:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:23.284 19:54:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:23.544 19:54:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:23.544 19:54:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:23.544 19:54:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:13:23.544 19:54:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:23.544 19:54:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:13:23.544 19:54:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:23.544 19:54:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:13:23.544 19:54:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:23.544 19:54:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:23.544 19:54:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.544 19:54:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:23.544 [2024-11-26 19:54:14.237725] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:23.544 [2024-11-26 19:54:14.237894] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:13:23.544 [2024-11-26 19:54:14.237907] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock 
does not contain this bdev's uuid 00:13:23.544 request: 00:13:23.544 { 00:13:23.544 "base_bdev": "BaseBdev1", 00:13:23.544 "raid_bdev": "raid_bdev1", 00:13:23.544 "method": "bdev_raid_add_base_bdev", 00:13:23.544 "req_id": 1 00:13:23.544 } 00:13:23.544 Got JSON-RPC error response 00:13:23.544 response: 00:13:23.544 { 00:13:23.544 "code": -22, 00:13:23.544 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:13:23.544 } 00:13:23.544 19:54:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:13:23.544 19:54:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:13:23.544 19:54:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:23.544 19:54:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:23.544 19:54:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:23.544 19:54:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:13:24.481 19:54:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:13:24.481 19:54:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:24.481 19:54:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:24.481 19:54:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:24.481 19:54:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:24.481 19:54:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:24.481 19:54:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:24.481 19:54:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:24.481 19:54:15 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:24.481 19:54:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:24.481 19:54:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:24.481 19:54:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.481 19:54:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:24.481 19:54:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.481 19:54:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.481 19:54:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:24.482 "name": "raid_bdev1", 00:13:24.482 "uuid": "44661987-bb41-43d0-92d2-ac8601dbf76a", 00:13:24.482 "strip_size_kb": 64, 00:13:24.482 "state": "online", 00:13:24.482 "raid_level": "raid5f", 00:13:24.482 "superblock": true, 00:13:24.482 "num_base_bdevs": 4, 00:13:24.482 "num_base_bdevs_discovered": 3, 00:13:24.482 "num_base_bdevs_operational": 3, 00:13:24.482 "base_bdevs_list": [ 00:13:24.482 { 00:13:24.482 "name": null, 00:13:24.482 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:24.482 "is_configured": false, 00:13:24.482 "data_offset": 0, 00:13:24.482 "data_size": 63488 00:13:24.482 }, 00:13:24.482 { 00:13:24.482 "name": "BaseBdev2", 00:13:24.482 "uuid": "f8c76858-a88b-5578-aba7-b19ebe977e3c", 00:13:24.482 "is_configured": true, 00:13:24.482 "data_offset": 2048, 00:13:24.482 "data_size": 63488 00:13:24.482 }, 00:13:24.482 { 00:13:24.482 "name": "BaseBdev3", 00:13:24.482 "uuid": "ac5b4600-fab8-51e5-bc8a-7ded034d7785", 00:13:24.482 "is_configured": true, 00:13:24.482 "data_offset": 2048, 00:13:24.482 "data_size": 63488 00:13:24.482 }, 00:13:24.482 { 00:13:24.482 "name": "BaseBdev4", 00:13:24.482 "uuid": 
"e520354a-6e5c-5672-acf8-b3a00a326e6d", 00:13:24.482 "is_configured": true, 00:13:24.482 "data_offset": 2048, 00:13:24.482 "data_size": 63488 00:13:24.482 } 00:13:24.482 ] 00:13:24.482 }' 00:13:24.482 19:54:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:24.482 19:54:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.741 19:54:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:24.741 19:54:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:24.741 19:54:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:24.741 19:54:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:24.741 19:54:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:24.741 19:54:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:24.741 19:54:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.741 19:54:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.741 19:54:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:24.741 19:54:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.741 19:54:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:24.741 "name": "raid_bdev1", 00:13:24.741 "uuid": "44661987-bb41-43d0-92d2-ac8601dbf76a", 00:13:24.741 "strip_size_kb": 64, 00:13:24.741 "state": "online", 00:13:24.741 "raid_level": "raid5f", 00:13:24.741 "superblock": true, 00:13:24.741 "num_base_bdevs": 4, 00:13:24.741 "num_base_bdevs_discovered": 3, 00:13:24.741 "num_base_bdevs_operational": 3, 00:13:24.741 
"base_bdevs_list": [ 00:13:24.741 { 00:13:24.741 "name": null, 00:13:24.741 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:24.741 "is_configured": false, 00:13:24.741 "data_offset": 0, 00:13:24.741 "data_size": 63488 00:13:24.741 }, 00:13:24.741 { 00:13:24.741 "name": "BaseBdev2", 00:13:24.741 "uuid": "f8c76858-a88b-5578-aba7-b19ebe977e3c", 00:13:24.741 "is_configured": true, 00:13:24.741 "data_offset": 2048, 00:13:24.741 "data_size": 63488 00:13:24.741 }, 00:13:24.741 { 00:13:24.741 "name": "BaseBdev3", 00:13:24.741 "uuid": "ac5b4600-fab8-51e5-bc8a-7ded034d7785", 00:13:24.741 "is_configured": true, 00:13:24.741 "data_offset": 2048, 00:13:24.741 "data_size": 63488 00:13:24.741 }, 00:13:24.741 { 00:13:24.741 "name": "BaseBdev4", 00:13:24.741 "uuid": "e520354a-6e5c-5672-acf8-b3a00a326e6d", 00:13:24.741 "is_configured": true, 00:13:24.741 "data_offset": 2048, 00:13:24.741 "data_size": 63488 00:13:24.741 } 00:13:24.741 ] 00:13:24.741 }' 00:13:24.741 19:54:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:24.741 19:54:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:24.741 19:54:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:24.741 19:54:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:24.741 19:54:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 82631 00:13:24.741 19:54:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 82631 ']' 00:13:24.741 19:54:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 82631 00:13:24.741 19:54:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:13:24.741 19:54:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:24.741 19:54:15 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82631 00:13:24.999 killing process with pid 82631 00:13:24.999 Received shutdown signal, test time was about 60.000000 seconds 00:13:24.999 00:13:24.999 Latency(us) 00:13:24.999 [2024-11-26T19:54:15.936Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:24.999 [2024-11-26T19:54:15.936Z] =================================================================================================================== 00:13:24.999 [2024-11-26T19:54:15.936Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:24.999 19:54:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:24.999 19:54:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:24.999 19:54:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82631' 00:13:24.999 19:54:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 82631 00:13:24.999 [2024-11-26 19:54:15.685676] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:24.999 19:54:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 82631 00:13:24.999 [2024-11-26 19:54:15.685791] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:24.999 [2024-11-26 19:54:15.685865] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:24.999 [2024-11-26 19:54:15.685875] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:13:25.258 [2024-11-26 19:54:15.939287] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:25.824 ************************************ 00:13:25.824 END TEST raid5f_rebuild_test_sb 00:13:25.824 ************************************ 00:13:25.824 19:54:16 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@786 -- # return 0 00:13:25.824 00:13:25.824 real 0m24.825s 00:13:25.824 user 0m30.143s 00:13:25.824 sys 0m2.262s 00:13:25.824 19:54:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:25.824 19:54:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:25.824 19:54:16 bdev_raid -- bdev/bdev_raid.sh@995 -- # base_blocklen=4096 00:13:25.824 19:54:16 bdev_raid -- bdev/bdev_raid.sh@997 -- # run_test raid_state_function_test_sb_4k raid_state_function_test raid1 2 true 00:13:25.824 19:54:16 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:13:25.824 19:54:16 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:25.824 19:54:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:25.824 ************************************ 00:13:25.824 START TEST raid_state_function_test_sb_4k 00:13:25.824 ************************************ 00:13:25.824 19:54:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:13:25.824 19:54:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:13:25.824 19:54:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:13:25.824 19:54:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:13:25.824 19:54:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:13:25.825 19:54:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:13:25.825 19:54:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:25.825 19:54:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:13:25.825 19:54:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
00:13:25.825 19:54:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:25.825 19:54:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:13:25.825 19:54:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:25.825 19:54:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:25.825 19:54:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:13:25.825 19:54:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:13:25.825 19:54:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:13:25.825 19:54:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # local strip_size 00:13:25.825 19:54:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:13:25.825 19:54:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:13:25.825 19:54:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:13:25.825 19:54:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:13:25.825 19:54:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:13:25.825 19:54:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:13:25.825 Process raid pid: 83428 00:13:25.825 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:13:25.825 19:54:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@229 -- # raid_pid=83428 00:13:25.825 19:54:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 83428' 00:13:25.825 19:54:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@231 -- # waitforlisten 83428 00:13:25.825 19:54:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@835 -- # '[' -z 83428 ']' 00:13:25.825 19:54:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:25.825 19:54:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:25.825 19:54:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:13:25.825 19:54:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:25.825 19:54:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:25.825 19:54:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:13:25.825 [2024-11-26 19:54:16.662688] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 
00:13:25.825 [2024-11-26 19:54:16.663075] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:26.083 [2024-11-26 19:54:16.834917] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:26.083 [2024-11-26 19:54:16.954704] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:26.340 [2024-11-26 19:54:17.103003] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:26.340 [2024-11-26 19:54:17.103045] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:26.598 19:54:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:26.598 19:54:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@868 -- # return 0 00:13:26.598 19:54:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:13:26.598 19:54:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.598 19:54:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:13:26.598 [2024-11-26 19:54:17.515222] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:26.598 [2024-11-26 19:54:17.515275] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:26.598 [2024-11-26 19:54:17.515290] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:26.598 [2024-11-26 19:54:17.515301] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:26.598 19:54:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:13:26.598 19:54:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:13:26.598 19:54:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:26.598 19:54:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:26.598 19:54:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:26.598 19:54:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:26.598 19:54:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:26.598 19:54:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:26.598 19:54:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:26.598 19:54:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:26.598 19:54:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:26.598 19:54:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:26.598 19:54:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.598 19:54:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:13:26.598 19:54:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:26.855 19:54:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.855 19:54:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:26.855 "name": "Existed_Raid", 00:13:26.855 "uuid": 
"eaa18f0e-e630-4ce1-bbbd-9665be63cf5b", 00:13:26.855 "strip_size_kb": 0, 00:13:26.855 "state": "configuring", 00:13:26.855 "raid_level": "raid1", 00:13:26.855 "superblock": true, 00:13:26.855 "num_base_bdevs": 2, 00:13:26.855 "num_base_bdevs_discovered": 0, 00:13:26.855 "num_base_bdevs_operational": 2, 00:13:26.855 "base_bdevs_list": [ 00:13:26.855 { 00:13:26.855 "name": "BaseBdev1", 00:13:26.855 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:26.855 "is_configured": false, 00:13:26.855 "data_offset": 0, 00:13:26.855 "data_size": 0 00:13:26.855 }, 00:13:26.855 { 00:13:26.855 "name": "BaseBdev2", 00:13:26.855 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:26.855 "is_configured": false, 00:13:26.855 "data_offset": 0, 00:13:26.855 "data_size": 0 00:13:26.855 } 00:13:26.855 ] 00:13:26.855 }' 00:13:26.855 19:54:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:26.855 19:54:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:13:27.156 19:54:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:27.156 19:54:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.156 19:54:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:13:27.156 [2024-11-26 19:54:17.811228] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:27.156 [2024-11-26 19:54:17.811258] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:13:27.156 19:54:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.156 19:54:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:13:27.156 19:54:17 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.156 19:54:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:13:27.156 [2024-11-26 19:54:17.819222] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:27.156 [2024-11-26 19:54:17.819259] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:27.156 [2024-11-26 19:54:17.819268] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:27.156 [2024-11-26 19:54:17.819280] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:27.156 19:54:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.156 19:54:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1 00:13:27.156 19:54:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.156 19:54:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:13:27.156 [2024-11-26 19:54:17.853830] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:27.156 BaseBdev1 00:13:27.156 19:54:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.156 19:54:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:13:27.156 19:54:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:13:27.156 19:54:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:27.156 19:54:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # local i 00:13:27.156 19:54:17 bdev_raid.raid_state_function_test_sb_4k 
-- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:27.156 19:54:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:27.156 19:54:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:27.156 19:54:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.156 19:54:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:13:27.156 19:54:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.156 19:54:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:27.156 19:54:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.156 19:54:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:13:27.156 [ 00:13:27.156 { 00:13:27.156 "name": "BaseBdev1", 00:13:27.156 "aliases": [ 00:13:27.156 "f3779670-fda5-4cec-8c14-56b07eeb7a66" 00:13:27.156 ], 00:13:27.156 "product_name": "Malloc disk", 00:13:27.156 "block_size": 4096, 00:13:27.156 "num_blocks": 8192, 00:13:27.156 "uuid": "f3779670-fda5-4cec-8c14-56b07eeb7a66", 00:13:27.156 "assigned_rate_limits": { 00:13:27.156 "rw_ios_per_sec": 0, 00:13:27.156 "rw_mbytes_per_sec": 0, 00:13:27.156 "r_mbytes_per_sec": 0, 00:13:27.156 "w_mbytes_per_sec": 0 00:13:27.156 }, 00:13:27.156 "claimed": true, 00:13:27.156 "claim_type": "exclusive_write", 00:13:27.156 "zoned": false, 00:13:27.156 "supported_io_types": { 00:13:27.156 "read": true, 00:13:27.156 "write": true, 00:13:27.156 "unmap": true, 00:13:27.156 "flush": true, 00:13:27.156 "reset": true, 00:13:27.156 "nvme_admin": false, 00:13:27.156 "nvme_io": false, 00:13:27.156 "nvme_io_md": false, 00:13:27.156 "write_zeroes": true, 00:13:27.156 "zcopy": true, 00:13:27.156 
"get_zone_info": false, 00:13:27.156 "zone_management": false, 00:13:27.156 "zone_append": false, 00:13:27.156 "compare": false, 00:13:27.156 "compare_and_write": false, 00:13:27.156 "abort": true, 00:13:27.156 "seek_hole": false, 00:13:27.156 "seek_data": false, 00:13:27.156 "copy": true, 00:13:27.156 "nvme_iov_md": false 00:13:27.156 }, 00:13:27.156 "memory_domains": [ 00:13:27.156 { 00:13:27.156 "dma_device_id": "system", 00:13:27.156 "dma_device_type": 1 00:13:27.156 }, 00:13:27.156 { 00:13:27.156 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:27.156 "dma_device_type": 2 00:13:27.156 } 00:13:27.156 ], 00:13:27.156 "driver_specific": {} 00:13:27.156 } 00:13:27.156 ] 00:13:27.156 19:54:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.156 19:54:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@911 -- # return 0 00:13:27.156 19:54:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:13:27.156 19:54:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:27.156 19:54:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:27.156 19:54:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:27.156 19:54:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:27.156 19:54:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:27.156 19:54:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:27.156 19:54:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:27.156 19:54:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:13:27.156 19:54:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:27.156 19:54:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:27.156 19:54:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:27.156 19:54:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.156 19:54:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:13:27.156 19:54:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.156 19:54:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:27.157 "name": "Existed_Raid", 00:13:27.157 "uuid": "46a5e61b-4897-42d5-bf93-fe313c0a528f", 00:13:27.157 "strip_size_kb": 0, 00:13:27.157 "state": "configuring", 00:13:27.157 "raid_level": "raid1", 00:13:27.157 "superblock": true, 00:13:27.157 "num_base_bdevs": 2, 00:13:27.157 "num_base_bdevs_discovered": 1, 00:13:27.157 "num_base_bdevs_operational": 2, 00:13:27.157 "base_bdevs_list": [ 00:13:27.157 { 00:13:27.157 "name": "BaseBdev1", 00:13:27.157 "uuid": "f3779670-fda5-4cec-8c14-56b07eeb7a66", 00:13:27.157 "is_configured": true, 00:13:27.157 "data_offset": 256, 00:13:27.157 "data_size": 7936 00:13:27.157 }, 00:13:27.157 { 00:13:27.157 "name": "BaseBdev2", 00:13:27.157 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:27.157 "is_configured": false, 00:13:27.157 "data_offset": 0, 00:13:27.157 "data_size": 0 00:13:27.157 } 00:13:27.157 ] 00:13:27.157 }' 00:13:27.157 19:54:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:27.157 19:54:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:13:27.413 19:54:18 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:27.413 19:54:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.413 19:54:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:13:27.413 [2024-11-26 19:54:18.205977] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:27.413 [2024-11-26 19:54:18.206028] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:13:27.413 19:54:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.413 19:54:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:13:27.413 19:54:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.413 19:54:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:13:27.413 [2024-11-26 19:54:18.214016] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:27.413 [2024-11-26 19:54:18.215987] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:27.413 [2024-11-26 19:54:18.216031] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:27.413 19:54:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.413 19:54:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:13:27.413 19:54:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:27.414 19:54:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:13:27.414 19:54:18 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:27.414 19:54:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:27.414 19:54:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:27.414 19:54:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:27.414 19:54:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:27.414 19:54:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:27.414 19:54:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:27.414 19:54:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:27.414 19:54:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:27.414 19:54:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:27.414 19:54:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:27.414 19:54:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.414 19:54:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:13:27.414 19:54:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.414 19:54:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:27.414 "name": "Existed_Raid", 00:13:27.414 "uuid": "cff0fba5-01ba-45b0-874b-1deddf1fe5a0", 00:13:27.414 "strip_size_kb": 0, 00:13:27.414 "state": "configuring", 00:13:27.414 "raid_level": "raid1", 00:13:27.414 "superblock": true, 
00:13:27.414 "num_base_bdevs": 2, 00:13:27.414 "num_base_bdevs_discovered": 1, 00:13:27.414 "num_base_bdevs_operational": 2, 00:13:27.414 "base_bdevs_list": [ 00:13:27.414 { 00:13:27.414 "name": "BaseBdev1", 00:13:27.414 "uuid": "f3779670-fda5-4cec-8c14-56b07eeb7a66", 00:13:27.414 "is_configured": true, 00:13:27.414 "data_offset": 256, 00:13:27.414 "data_size": 7936 00:13:27.414 }, 00:13:27.414 { 00:13:27.414 "name": "BaseBdev2", 00:13:27.414 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:27.414 "is_configured": false, 00:13:27.414 "data_offset": 0, 00:13:27.414 "data_size": 0 00:13:27.414 } 00:13:27.414 ] 00:13:27.414 }' 00:13:27.414 19:54:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:27.414 19:54:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:13:27.671 19:54:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2 00:13:27.671 19:54:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.671 19:54:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:13:27.671 [2024-11-26 19:54:18.550255] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:27.671 [2024-11-26 19:54:18.550552] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:13:27.671 [2024-11-26 19:54:18.550564] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:13:27.671 [2024-11-26 19:54:18.550789] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:13:27.671 [2024-11-26 19:54:18.550920] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:13:27.671 [2024-11-26 19:54:18.550931] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 
0x617000007e80 00:13:27.671 BaseBdev2 00:13:27.671 [2024-11-26 19:54:18.551063] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:27.671 19:54:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.671 19:54:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:13:27.671 19:54:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:13:27.671 19:54:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:27.671 19:54:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # local i 00:13:27.671 19:54:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:27.671 19:54:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:27.671 19:54:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:27.671 19:54:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.671 19:54:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:13:27.671 19:54:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.671 19:54:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:27.671 19:54:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.671 19:54:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:13:27.671 [ 00:13:27.671 { 00:13:27.671 "name": "BaseBdev2", 00:13:27.671 "aliases": [ 00:13:27.671 "71b36eb0-369f-4f4a-b28a-1d54b054e218" 00:13:27.671 ], 00:13:27.671 "product_name": "Malloc 
disk", 00:13:27.671 "block_size": 4096, 00:13:27.671 "num_blocks": 8192, 00:13:27.671 "uuid": "71b36eb0-369f-4f4a-b28a-1d54b054e218", 00:13:27.671 "assigned_rate_limits": { 00:13:27.671 "rw_ios_per_sec": 0, 00:13:27.671 "rw_mbytes_per_sec": 0, 00:13:27.671 "r_mbytes_per_sec": 0, 00:13:27.671 "w_mbytes_per_sec": 0 00:13:27.671 }, 00:13:27.671 "claimed": true, 00:13:27.671 "claim_type": "exclusive_write", 00:13:27.671 "zoned": false, 00:13:27.671 "supported_io_types": { 00:13:27.671 "read": true, 00:13:27.671 "write": true, 00:13:27.671 "unmap": true, 00:13:27.671 "flush": true, 00:13:27.671 "reset": true, 00:13:27.671 "nvme_admin": false, 00:13:27.671 "nvme_io": false, 00:13:27.671 "nvme_io_md": false, 00:13:27.671 "write_zeroes": true, 00:13:27.671 "zcopy": true, 00:13:27.671 "get_zone_info": false, 00:13:27.671 "zone_management": false, 00:13:27.671 "zone_append": false, 00:13:27.671 "compare": false, 00:13:27.671 "compare_and_write": false, 00:13:27.671 "abort": true, 00:13:27.671 "seek_hole": false, 00:13:27.671 "seek_data": false, 00:13:27.671 "copy": true, 00:13:27.671 "nvme_iov_md": false 00:13:27.671 }, 00:13:27.671 "memory_domains": [ 00:13:27.671 { 00:13:27.671 "dma_device_id": "system", 00:13:27.671 "dma_device_type": 1 00:13:27.671 }, 00:13:27.671 { 00:13:27.671 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:27.671 "dma_device_type": 2 00:13:27.671 } 00:13:27.671 ], 00:13:27.671 "driver_specific": {} 00:13:27.671 } 00:13:27.671 ] 00:13:27.671 19:54:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.671 19:54:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@911 -- # return 0 00:13:27.671 19:54:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:27.671 19:54:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:27.671 19:54:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@255 
-- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:13:27.671 19:54:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:27.671 19:54:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:27.671 19:54:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:27.671 19:54:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:27.671 19:54:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:27.671 19:54:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:27.671 19:54:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:27.671 19:54:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:27.671 19:54:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:27.671 19:54:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:27.671 19:54:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:27.671 19:54:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.671 19:54:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:13:27.671 19:54:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.928 19:54:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:27.928 "name": "Existed_Raid", 00:13:27.928 "uuid": "cff0fba5-01ba-45b0-874b-1deddf1fe5a0", 00:13:27.928 "strip_size_kb": 0, 00:13:27.928 "state": "online", 
00:13:27.928 "raid_level": "raid1", 00:13:27.928 "superblock": true, 00:13:27.928 "num_base_bdevs": 2, 00:13:27.928 "num_base_bdevs_discovered": 2, 00:13:27.928 "num_base_bdevs_operational": 2, 00:13:27.928 "base_bdevs_list": [ 00:13:27.928 { 00:13:27.928 "name": "BaseBdev1", 00:13:27.928 "uuid": "f3779670-fda5-4cec-8c14-56b07eeb7a66", 00:13:27.928 "is_configured": true, 00:13:27.928 "data_offset": 256, 00:13:27.928 "data_size": 7936 00:13:27.928 }, 00:13:27.928 { 00:13:27.928 "name": "BaseBdev2", 00:13:27.928 "uuid": "71b36eb0-369f-4f4a-b28a-1d54b054e218", 00:13:27.928 "is_configured": true, 00:13:27.928 "data_offset": 256, 00:13:27.928 "data_size": 7936 00:13:27.928 } 00:13:27.928 ] 00:13:27.928 }' 00:13:27.928 19:54:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:27.928 19:54:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:13:28.232 19:54:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:13:28.232 19:54:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:28.232 19:54:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:28.232 19:54:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:28.232 19:54:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local name 00:13:28.232 19:54:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:28.232 19:54:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:28.232 19:54:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.232 19:54:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set 
+x 00:13:28.232 19:54:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:28.232 [2024-11-26 19:54:18.906628] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:28.232 19:54:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.232 19:54:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:28.232 "name": "Existed_Raid", 00:13:28.232 "aliases": [ 00:13:28.232 "cff0fba5-01ba-45b0-874b-1deddf1fe5a0" 00:13:28.232 ], 00:13:28.232 "product_name": "Raid Volume", 00:13:28.233 "block_size": 4096, 00:13:28.233 "num_blocks": 7936, 00:13:28.233 "uuid": "cff0fba5-01ba-45b0-874b-1deddf1fe5a0", 00:13:28.233 "assigned_rate_limits": { 00:13:28.233 "rw_ios_per_sec": 0, 00:13:28.233 "rw_mbytes_per_sec": 0, 00:13:28.233 "r_mbytes_per_sec": 0, 00:13:28.233 "w_mbytes_per_sec": 0 00:13:28.233 }, 00:13:28.233 "claimed": false, 00:13:28.233 "zoned": false, 00:13:28.233 "supported_io_types": { 00:13:28.233 "read": true, 00:13:28.233 "write": true, 00:13:28.233 "unmap": false, 00:13:28.233 "flush": false, 00:13:28.233 "reset": true, 00:13:28.233 "nvme_admin": false, 00:13:28.233 "nvme_io": false, 00:13:28.233 "nvme_io_md": false, 00:13:28.233 "write_zeroes": true, 00:13:28.233 "zcopy": false, 00:13:28.233 "get_zone_info": false, 00:13:28.233 "zone_management": false, 00:13:28.233 "zone_append": false, 00:13:28.233 "compare": false, 00:13:28.233 "compare_and_write": false, 00:13:28.233 "abort": false, 00:13:28.233 "seek_hole": false, 00:13:28.233 "seek_data": false, 00:13:28.233 "copy": false, 00:13:28.233 "nvme_iov_md": false 00:13:28.233 }, 00:13:28.233 "memory_domains": [ 00:13:28.233 { 00:13:28.233 "dma_device_id": "system", 00:13:28.233 "dma_device_type": 1 00:13:28.233 }, 00:13:28.233 { 00:13:28.233 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:28.233 "dma_device_type": 2 00:13:28.233 }, 00:13:28.233 { 00:13:28.233 
"dma_device_id": "system", 00:13:28.233 "dma_device_type": 1 00:13:28.233 }, 00:13:28.233 { 00:13:28.233 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:28.233 "dma_device_type": 2 00:13:28.233 } 00:13:28.233 ], 00:13:28.233 "driver_specific": { 00:13:28.233 "raid": { 00:13:28.233 "uuid": "cff0fba5-01ba-45b0-874b-1deddf1fe5a0", 00:13:28.233 "strip_size_kb": 0, 00:13:28.233 "state": "online", 00:13:28.233 "raid_level": "raid1", 00:13:28.233 "superblock": true, 00:13:28.233 "num_base_bdevs": 2, 00:13:28.233 "num_base_bdevs_discovered": 2, 00:13:28.233 "num_base_bdevs_operational": 2, 00:13:28.233 "base_bdevs_list": [ 00:13:28.233 { 00:13:28.233 "name": "BaseBdev1", 00:13:28.233 "uuid": "f3779670-fda5-4cec-8c14-56b07eeb7a66", 00:13:28.233 "is_configured": true, 00:13:28.233 "data_offset": 256, 00:13:28.233 "data_size": 7936 00:13:28.233 }, 00:13:28.233 { 00:13:28.233 "name": "BaseBdev2", 00:13:28.233 "uuid": "71b36eb0-369f-4f4a-b28a-1d54b054e218", 00:13:28.233 "is_configured": true, 00:13:28.233 "data_offset": 256, 00:13:28.233 "data_size": 7936 00:13:28.233 } 00:13:28.233 ] 00:13:28.233 } 00:13:28.233 } 00:13:28.233 }' 00:13:28.233 19:54:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:28.233 19:54:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:13:28.233 BaseBdev2' 00:13:28.233 19:54:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:28.233 19:54:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:13:28.233 19:54:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:28.233 19:54:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 
00:13:28.233 19:54:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.233 19:54:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:13:28.233 19:54:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:28.233 19:54:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.233 19:54:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:13:28.233 19:54:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:13:28.233 19:54:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:28.233 19:54:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:28.233 19:54:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:28.233 19:54:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.233 19:54:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:13:28.233 19:54:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.233 19:54:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:13:28.233 19:54:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:13:28.233 19:54:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:28.233 19:54:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.233 
19:54:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:13:28.233 [2024-11-26 19:54:19.078457] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:28.233 19:54:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.233 19:54:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@260 -- # local expected_state 00:13:28.233 19:54:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:13:28.233 19:54:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:28.233 19:54:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:13:28.233 19:54:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:13:28.233 19:54:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:13:28.233 19:54:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:28.233 19:54:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:28.233 19:54:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:28.233 19:54:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:28.233 19:54:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:28.233 19:54:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:28.233 19:54:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:28.233 19:54:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:28.233 19:54:19 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:28.233 19:54:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:28.233 19:54:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:28.233 19:54:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.234 19:54:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:13:28.234 19:54:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.234 19:54:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:28.234 "name": "Existed_Raid", 00:13:28.234 "uuid": "cff0fba5-01ba-45b0-874b-1deddf1fe5a0", 00:13:28.234 "strip_size_kb": 0, 00:13:28.234 "state": "online", 00:13:28.234 "raid_level": "raid1", 00:13:28.234 "superblock": true, 00:13:28.234 "num_base_bdevs": 2, 00:13:28.234 "num_base_bdevs_discovered": 1, 00:13:28.234 "num_base_bdevs_operational": 1, 00:13:28.234 "base_bdevs_list": [ 00:13:28.234 { 00:13:28.234 "name": null, 00:13:28.234 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:28.234 "is_configured": false, 00:13:28.234 "data_offset": 0, 00:13:28.234 "data_size": 7936 00:13:28.234 }, 00:13:28.234 { 00:13:28.234 "name": "BaseBdev2", 00:13:28.234 "uuid": "71b36eb0-369f-4f4a-b28a-1d54b054e218", 00:13:28.234 "is_configured": true, 00:13:28.234 "data_offset": 256, 00:13:28.234 "data_size": 7936 00:13:28.234 } 00:13:28.234 ] 00:13:28.234 }' 00:13:28.234 19:54:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:28.234 19:54:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:13:28.490 19:54:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:13:28.490 19:54:19 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:28.490 19:54:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:28.490 19:54:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:28.490 19:54:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.490 19:54:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:13:28.748 19:54:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.748 19:54:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:28.748 19:54:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:28.748 19:54:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:13:28.748 19:54:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.748 19:54:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:13:28.748 [2024-11-26 19:54:19.452235] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:28.748 [2024-11-26 19:54:19.452446] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:28.748 [2024-11-26 19:54:19.502049] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:28.748 [2024-11-26 19:54:19.502098] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:28.748 [2024-11-26 19:54:19.502109] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:13:28.748 19:54:19 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.748 19:54:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:28.748 19:54:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:28.748 19:54:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:28.748 19:54:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.748 19:54:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:13:28.748 19:54:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:13:28.748 19:54:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.748 19:54:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:13:28.748 19:54:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:13:28.748 19:54:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:13:28.748 19:54:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@326 -- # killprocess 83428 00:13:28.748 19:54:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@954 -- # '[' -z 83428 ']' 00:13:28.748 19:54:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@958 -- # kill -0 83428 00:13:28.748 19:54:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@959 -- # uname 00:13:28.748 19:54:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:28.748 19:54:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83428 00:13:28.748 killing process with pid 83428 00:13:28.748 19:54:19 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:28.748 19:54:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:28.748 19:54:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83428' 00:13:28.749 19:54:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@973 -- # kill 83428 00:13:28.749 [2024-11-26 19:54:19.563561] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:28.749 19:54:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@978 -- # wait 83428 00:13:28.749 [2024-11-26 19:54:19.572443] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:29.315 ************************************ 00:13:29.315 END TEST raid_state_function_test_sb_4k 00:13:29.315 ************************************ 00:13:29.315 19:54:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@328 -- # return 0 00:13:29.315 00:13:29.315 real 0m3.605s 00:13:29.315 user 0m5.206s 00:13:29.315 sys 0m0.609s 00:13:29.315 19:54:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:29.315 19:54:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:13:29.315 19:54:20 bdev_raid -- bdev/bdev_raid.sh@998 -- # run_test raid_superblock_test_4k raid_superblock_test raid1 2 00:13:29.316 19:54:20 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:13:29.316 19:54:20 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:29.316 19:54:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:29.316 ************************************ 00:13:29.316 START TEST raid_superblock_test_4k 00:13:29.316 ************************************ 00:13:29.316 19:54:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1129 -- # 
raid_superblock_test raid1 2 00:13:29.316 19:54:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:13:29.316 19:54:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:13:29.316 19:54:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:13:29.316 19:54:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:13:29.316 19:54:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:13:29.316 19:54:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:13:29.316 19:54:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:13:29.316 19:54:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:13:29.316 19:54:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:13:29.316 19:54:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@399 -- # local strip_size 00:13:29.316 19:54:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:13:29.316 19:54:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:13:29.316 19:54:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:13:29.316 19:54:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:13:29.316 19:54:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:13:29.316 19:54:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # raid_pid=83669 00:13:29.316 19:54:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:13:29.316 19:54:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@413 -- # 
waitforlisten 83669 00:13:29.316 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:29.316 19:54:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@835 -- # '[' -z 83669 ']' 00:13:29.316 19:54:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:29.316 19:54:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:29.316 19:54:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:29.316 19:54:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:29.316 19:54:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:13:29.573 [2024-11-26 19:54:20.309184] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 00:13:29.573 [2024-11-26 19:54:20.309326] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83669 ] 00:13:29.573 [2024-11-26 19:54:20.458237] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:29.830 [2024-11-26 19:54:20.560484] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:29.830 [2024-11-26 19:54:20.679665] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:29.830 [2024-11-26 19:54:20.679722] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:30.397 19:54:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:30.397 19:54:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@868 -- # return 0 00:13:30.397 19:54:21 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:13:30.397 19:54:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:30.397 19:54:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:13:30.397 19:54:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:13:30.397 19:54:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:13:30.397 19:54:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:30.397 19:54:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:30.397 19:54:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:30.397 19:54:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc1 00:13:30.397 19:54:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.397 19:54:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:13:30.397 malloc1 00:13:30.397 19:54:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.397 19:54:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:30.397 19:54:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.397 19:54:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:13:30.397 [2024-11-26 19:54:21.146332] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:30.397 [2024-11-26 19:54:21.146401] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:30.397 
[2024-11-26 19:54:21.146420] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:30.397 [2024-11-26 19:54:21.146428] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:30.397 [2024-11-26 19:54:21.148339] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:30.397 [2024-11-26 19:54:21.148397] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:30.397 pt1 00:13:30.397 19:54:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.397 19:54:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:30.397 19:54:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:30.397 19:54:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:13:30.397 19:54:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:13:30.397 19:54:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:13:30.397 19:54:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:30.397 19:54:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:30.397 19:54:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:30.397 19:54:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc2 00:13:30.397 19:54:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.397 19:54:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:13:30.397 malloc2 00:13:30.397 19:54:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:13:30.397 19:54:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:30.397 19:54:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.397 19:54:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:13:30.397 [2024-11-26 19:54:21.183783] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:30.397 [2024-11-26 19:54:21.183825] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:30.397 [2024-11-26 19:54:21.183845] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:30.397 [2024-11-26 19:54:21.183853] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:30.397 [2024-11-26 19:54:21.185669] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:30.397 [2024-11-26 19:54:21.185793] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:30.397 pt2 00:13:30.397 19:54:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.397 19:54:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:30.397 19:54:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:30.397 19:54:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:13:30.397 19:54:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.397 19:54:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:13:30.397 [2024-11-26 19:54:21.191831] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:30.397 [2024-11-26 19:54:21.193458] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:30.397 [2024-11-26 19:54:21.193597] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:30.398 [2024-11-26 19:54:21.193610] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:13:30.398 [2024-11-26 19:54:21.193823] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:13:30.398 [2024-11-26 19:54:21.193951] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:30.398 [2024-11-26 19:54:21.193963] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:30.398 [2024-11-26 19:54:21.194086] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:30.398 19:54:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.398 19:54:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:30.398 19:54:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:30.398 19:54:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:30.398 19:54:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:30.398 19:54:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:30.398 19:54:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:30.398 19:54:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:30.398 19:54:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:30.398 19:54:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:13:30.398 19:54:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:30.398 19:54:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:30.398 19:54:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:30.398 19:54:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.398 19:54:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:13:30.398 19:54:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.398 19:54:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:30.398 "name": "raid_bdev1", 00:13:30.398 "uuid": "5045bd9c-9ec8-46a4-8e5d-202641a7c2fc", 00:13:30.398 "strip_size_kb": 0, 00:13:30.398 "state": "online", 00:13:30.398 "raid_level": "raid1", 00:13:30.398 "superblock": true, 00:13:30.398 "num_base_bdevs": 2, 00:13:30.398 "num_base_bdevs_discovered": 2, 00:13:30.398 "num_base_bdevs_operational": 2, 00:13:30.398 "base_bdevs_list": [ 00:13:30.398 { 00:13:30.398 "name": "pt1", 00:13:30.398 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:30.398 "is_configured": true, 00:13:30.398 "data_offset": 256, 00:13:30.398 "data_size": 7936 00:13:30.398 }, 00:13:30.398 { 00:13:30.398 "name": "pt2", 00:13:30.398 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:30.398 "is_configured": true, 00:13:30.398 "data_offset": 256, 00:13:30.398 "data_size": 7936 00:13:30.398 } 00:13:30.398 ] 00:13:30.398 }' 00:13:30.398 19:54:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:30.398 19:54:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:13:30.656 19:54:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:13:30.656 19:54:21 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:13:30.656 19:54:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:30.656 19:54:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:30.656 19:54:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:13:30.656 19:54:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:30.656 19:54:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:30.656 19:54:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:30.656 19:54:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.656 19:54:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:13:30.656 [2024-11-26 19:54:21.508144] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:30.656 19:54:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.656 19:54:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:30.656 "name": "raid_bdev1", 00:13:30.656 "aliases": [ 00:13:30.656 "5045bd9c-9ec8-46a4-8e5d-202641a7c2fc" 00:13:30.656 ], 00:13:30.656 "product_name": "Raid Volume", 00:13:30.656 "block_size": 4096, 00:13:30.656 "num_blocks": 7936, 00:13:30.656 "uuid": "5045bd9c-9ec8-46a4-8e5d-202641a7c2fc", 00:13:30.656 "assigned_rate_limits": { 00:13:30.656 "rw_ios_per_sec": 0, 00:13:30.656 "rw_mbytes_per_sec": 0, 00:13:30.656 "r_mbytes_per_sec": 0, 00:13:30.656 "w_mbytes_per_sec": 0 00:13:30.656 }, 00:13:30.656 "claimed": false, 00:13:30.656 "zoned": false, 00:13:30.656 "supported_io_types": { 00:13:30.656 "read": true, 00:13:30.656 "write": true, 00:13:30.656 "unmap": false, 00:13:30.656 "flush": false, 
00:13:30.656 "reset": true, 00:13:30.656 "nvme_admin": false, 00:13:30.656 "nvme_io": false, 00:13:30.656 "nvme_io_md": false, 00:13:30.656 "write_zeroes": true, 00:13:30.656 "zcopy": false, 00:13:30.656 "get_zone_info": false, 00:13:30.656 "zone_management": false, 00:13:30.656 "zone_append": false, 00:13:30.656 "compare": false, 00:13:30.656 "compare_and_write": false, 00:13:30.656 "abort": false, 00:13:30.656 "seek_hole": false, 00:13:30.656 "seek_data": false, 00:13:30.656 "copy": false, 00:13:30.656 "nvme_iov_md": false 00:13:30.656 }, 00:13:30.656 "memory_domains": [ 00:13:30.656 { 00:13:30.656 "dma_device_id": "system", 00:13:30.656 "dma_device_type": 1 00:13:30.656 }, 00:13:30.656 { 00:13:30.656 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:30.656 "dma_device_type": 2 00:13:30.656 }, 00:13:30.656 { 00:13:30.656 "dma_device_id": "system", 00:13:30.656 "dma_device_type": 1 00:13:30.656 }, 00:13:30.656 { 00:13:30.656 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:30.656 "dma_device_type": 2 00:13:30.656 } 00:13:30.656 ], 00:13:30.656 "driver_specific": { 00:13:30.656 "raid": { 00:13:30.656 "uuid": "5045bd9c-9ec8-46a4-8e5d-202641a7c2fc", 00:13:30.656 "strip_size_kb": 0, 00:13:30.656 "state": "online", 00:13:30.656 "raid_level": "raid1", 00:13:30.657 "superblock": true, 00:13:30.657 "num_base_bdevs": 2, 00:13:30.657 "num_base_bdevs_discovered": 2, 00:13:30.657 "num_base_bdevs_operational": 2, 00:13:30.657 "base_bdevs_list": [ 00:13:30.657 { 00:13:30.657 "name": "pt1", 00:13:30.657 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:30.657 "is_configured": true, 00:13:30.657 "data_offset": 256, 00:13:30.657 "data_size": 7936 00:13:30.657 }, 00:13:30.657 { 00:13:30.657 "name": "pt2", 00:13:30.657 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:30.657 "is_configured": true, 00:13:30.657 "data_offset": 256, 00:13:30.657 "data_size": 7936 00:13:30.657 } 00:13:30.657 ] 00:13:30.657 } 00:13:30.657 } 00:13:30.657 }' 00:13:30.657 19:54:21 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:30.657 19:54:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:13:30.657 pt2' 00:13:30.657 19:54:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:30.915 19:54:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:13:30.915 19:54:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:30.915 19:54:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:13:30.915 19:54:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:30.915 19:54:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.915 19:54:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:13:30.915 19:54:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.915 19:54:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:13:30.915 19:54:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:13:30.915 19:54:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:30.915 19:54:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:13:30.915 19:54:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.915 19:54:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:13:30.915 19:54:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # 
jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:30.915 19:54:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.915 19:54:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:13:30.915 19:54:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:13:30.915 19:54:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:13:30.915 19:54:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:30.915 19:54:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.915 19:54:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:13:30.915 [2024-11-26 19:54:21.696145] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:30.915 19:54:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.915 19:54:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=5045bd9c-9ec8-46a4-8e5d-202641a7c2fc 00:13:30.915 19:54:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@436 -- # '[' -z 5045bd9c-9ec8-46a4-8e5d-202641a7c2fc ']' 00:13:30.915 19:54:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:30.915 19:54:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.915 19:54:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:13:30.915 [2024-11-26 19:54:21.715879] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:30.915 [2024-11-26 19:54:21.715900] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:30.915 [2024-11-26 19:54:21.715973] bdev_raid.c: 
492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:30.915 [2024-11-26 19:54:21.716029] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:30.915 [2024-11-26 19:54:21.716039] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:30.915 19:54:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.915 19:54:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:13:30.915 19:54:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:30.915 19:54:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.915 19:54:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:13:30.915 19:54:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.915 19:54:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:13:30.915 19:54:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:13:30.915 19:54:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:30.915 19:54:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:13:30.915 19:54:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.915 19:54:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:13:30.915 19:54:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.915 19:54:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:30.915 19:54:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 
00:13:30.915 19:54:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.915 19:54:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:13:30.915 19:54:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.915 19:54:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:13:30.915 19:54:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:13:30.916 19:54:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.916 19:54:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:13:30.916 19:54:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.916 19:54:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:13:30.916 19:54:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:13:30.916 19:54:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@652 -- # local es=0 00:13:30.916 19:54:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:13:30.916 19:54:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:13:30.916 19:54:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:30.916 19:54:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:13:30.916 19:54:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:30.916 19:54:21 bdev_raid.raid_superblock_test_4k -- 
common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:13:30.916 19:54:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.916 19:54:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:13:30.916 [2024-11-26 19:54:21.807930] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:13:30.916 [2024-11-26 19:54:21.809640] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:13:30.916 [2024-11-26 19:54:21.809700] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:13:30.916 [2024-11-26 19:54:21.809751] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:13:30.916 [2024-11-26 19:54:21.809763] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:30.916 [2024-11-26 19:54:21.809773] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:13:30.916 request: 00:13:30.916 { 00:13:30.916 "name": "raid_bdev1", 00:13:30.916 "raid_level": "raid1", 00:13:30.916 "base_bdevs": [ 00:13:30.916 "malloc1", 00:13:30.916 "malloc2" 00:13:30.916 ], 00:13:30.916 "superblock": false, 00:13:30.916 "method": "bdev_raid_create", 00:13:30.916 "req_id": 1 00:13:30.916 } 00:13:30.916 Got JSON-RPC error response 00:13:30.916 response: 00:13:30.916 { 00:13:30.916 "code": -17, 00:13:30.916 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:13:30.916 } 00:13:30.916 19:54:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:13:30.916 19:54:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@655 -- # es=1 00:13:30.916 19:54:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@663 -- # (( es > 
128 )) 00:13:30.916 19:54:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:30.916 19:54:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:30.916 19:54:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:30.916 19:54:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.916 19:54:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:13:30.916 19:54:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:13:30.916 19:54:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.916 19:54:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:13:30.916 19:54:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:13:30.916 19:54:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:30.916 19:54:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.916 19:54:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:13:30.916 [2024-11-26 19:54:21.847911] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:30.916 [2024-11-26 19:54:21.847958] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:30.916 [2024-11-26 19:54:21.847975] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:30.916 [2024-11-26 19:54:21.847984] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:31.174 [2024-11-26 19:54:21.849945] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:31.174 [2024-11-26 19:54:21.850057] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:31.174 [2024-11-26 19:54:21.850138] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:13:31.174 [2024-11-26 19:54:21.850190] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:31.174 pt1 00:13:31.174 19:54:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.174 19:54:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:13:31.174 19:54:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:31.174 19:54:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:31.174 19:54:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:31.174 19:54:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:31.174 19:54:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:31.174 19:54:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:31.174 19:54:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:31.174 19:54:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:31.174 19:54:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:31.174 19:54:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:31.174 19:54:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.174 19:54:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:13:31.174 19:54:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] 
| select(.name == "raid_bdev1")' 00:13:31.174 19:54:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.174 19:54:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:31.174 "name": "raid_bdev1", 00:13:31.174 "uuid": "5045bd9c-9ec8-46a4-8e5d-202641a7c2fc", 00:13:31.174 "strip_size_kb": 0, 00:13:31.174 "state": "configuring", 00:13:31.174 "raid_level": "raid1", 00:13:31.174 "superblock": true, 00:13:31.174 "num_base_bdevs": 2, 00:13:31.174 "num_base_bdevs_discovered": 1, 00:13:31.174 "num_base_bdevs_operational": 2, 00:13:31.174 "base_bdevs_list": [ 00:13:31.174 { 00:13:31.174 "name": "pt1", 00:13:31.174 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:31.174 "is_configured": true, 00:13:31.174 "data_offset": 256, 00:13:31.174 "data_size": 7936 00:13:31.174 }, 00:13:31.174 { 00:13:31.174 "name": null, 00:13:31.174 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:31.174 "is_configured": false, 00:13:31.174 "data_offset": 256, 00:13:31.174 "data_size": 7936 00:13:31.174 } 00:13:31.174 ] 00:13:31.174 }' 00:13:31.174 19:54:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:31.174 19:54:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:13:31.432 19:54:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:13:31.432 19:54:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:13:31.432 19:54:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:31.432 19:54:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:31.432 19:54:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.432 19:54:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set 
+x 00:13:31.432 [2024-11-26 19:54:22.184019] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:31.432 [2024-11-26 19:54:22.184086] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:31.432 [2024-11-26 19:54:22.184103] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:13:31.433 [2024-11-26 19:54:22.184113] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:31.433 [2024-11-26 19:54:22.184536] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:31.433 [2024-11-26 19:54:22.184587] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:31.433 [2024-11-26 19:54:22.184662] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:31.433 [2024-11-26 19:54:22.184687] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:31.433 [2024-11-26 19:54:22.184792] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:13:31.433 [2024-11-26 19:54:22.184802] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:13:31.433 [2024-11-26 19:54:22.185020] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:13:31.433 [2024-11-26 19:54:22.185138] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:13:31.433 [2024-11-26 19:54:22.185145] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:13:31.433 [2024-11-26 19:54:22.185255] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:31.433 pt2 00:13:31.433 19:54:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.433 19:54:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:13:31.433 19:54:22 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:31.433 19:54:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:31.433 19:54:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:31.433 19:54:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:31.433 19:54:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:31.433 19:54:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:31.433 19:54:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:31.433 19:54:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:31.433 19:54:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:31.433 19:54:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:31.433 19:54:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:31.433 19:54:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:31.433 19:54:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.433 19:54:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:13:31.433 19:54:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:31.433 19:54:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.433 19:54:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:31.433 "name": "raid_bdev1", 00:13:31.433 "uuid": "5045bd9c-9ec8-46a4-8e5d-202641a7c2fc", 00:13:31.433 
"strip_size_kb": 0, 00:13:31.433 "state": "online", 00:13:31.433 "raid_level": "raid1", 00:13:31.433 "superblock": true, 00:13:31.433 "num_base_bdevs": 2, 00:13:31.433 "num_base_bdevs_discovered": 2, 00:13:31.433 "num_base_bdevs_operational": 2, 00:13:31.433 "base_bdevs_list": [ 00:13:31.433 { 00:13:31.433 "name": "pt1", 00:13:31.433 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:31.433 "is_configured": true, 00:13:31.433 "data_offset": 256, 00:13:31.433 "data_size": 7936 00:13:31.433 }, 00:13:31.433 { 00:13:31.433 "name": "pt2", 00:13:31.433 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:31.433 "is_configured": true, 00:13:31.433 "data_offset": 256, 00:13:31.433 "data_size": 7936 00:13:31.433 } 00:13:31.433 ] 00:13:31.433 }' 00:13:31.433 19:54:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:31.433 19:54:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:13:31.691 19:54:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:13:31.691 19:54:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:13:31.691 19:54:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:31.691 19:54:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:31.691 19:54:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:13:31.691 19:54:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:31.691 19:54:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:31.691 19:54:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:31.691 19:54:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.691 19:54:22 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:13:31.691 [2024-11-26 19:54:22.484293] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:31.691 19:54:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.691 19:54:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:31.691 "name": "raid_bdev1", 00:13:31.691 "aliases": [ 00:13:31.691 "5045bd9c-9ec8-46a4-8e5d-202641a7c2fc" 00:13:31.691 ], 00:13:31.691 "product_name": "Raid Volume", 00:13:31.691 "block_size": 4096, 00:13:31.691 "num_blocks": 7936, 00:13:31.691 "uuid": "5045bd9c-9ec8-46a4-8e5d-202641a7c2fc", 00:13:31.691 "assigned_rate_limits": { 00:13:31.691 "rw_ios_per_sec": 0, 00:13:31.691 "rw_mbytes_per_sec": 0, 00:13:31.691 "r_mbytes_per_sec": 0, 00:13:31.691 "w_mbytes_per_sec": 0 00:13:31.691 }, 00:13:31.691 "claimed": false, 00:13:31.691 "zoned": false, 00:13:31.691 "supported_io_types": { 00:13:31.691 "read": true, 00:13:31.691 "write": true, 00:13:31.691 "unmap": false, 00:13:31.691 "flush": false, 00:13:31.691 "reset": true, 00:13:31.691 "nvme_admin": false, 00:13:31.691 "nvme_io": false, 00:13:31.691 "nvme_io_md": false, 00:13:31.691 "write_zeroes": true, 00:13:31.691 "zcopy": false, 00:13:31.691 "get_zone_info": false, 00:13:31.691 "zone_management": false, 00:13:31.691 "zone_append": false, 00:13:31.691 "compare": false, 00:13:31.691 "compare_and_write": false, 00:13:31.691 "abort": false, 00:13:31.691 "seek_hole": false, 00:13:31.691 "seek_data": false, 00:13:31.691 "copy": false, 00:13:31.691 "nvme_iov_md": false 00:13:31.691 }, 00:13:31.691 "memory_domains": [ 00:13:31.691 { 00:13:31.691 "dma_device_id": "system", 00:13:31.691 "dma_device_type": 1 00:13:31.691 }, 00:13:31.691 { 00:13:31.691 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:31.691 "dma_device_type": 2 00:13:31.691 }, 00:13:31.691 { 00:13:31.691 "dma_device_id": "system", 00:13:31.691 
"dma_device_type": 1 00:13:31.691 }, 00:13:31.691 { 00:13:31.691 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:31.691 "dma_device_type": 2 00:13:31.691 } 00:13:31.691 ], 00:13:31.691 "driver_specific": { 00:13:31.691 "raid": { 00:13:31.691 "uuid": "5045bd9c-9ec8-46a4-8e5d-202641a7c2fc", 00:13:31.691 "strip_size_kb": 0, 00:13:31.691 "state": "online", 00:13:31.691 "raid_level": "raid1", 00:13:31.691 "superblock": true, 00:13:31.691 "num_base_bdevs": 2, 00:13:31.691 "num_base_bdevs_discovered": 2, 00:13:31.691 "num_base_bdevs_operational": 2, 00:13:31.691 "base_bdevs_list": [ 00:13:31.691 { 00:13:31.691 "name": "pt1", 00:13:31.691 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:31.691 "is_configured": true, 00:13:31.691 "data_offset": 256, 00:13:31.691 "data_size": 7936 00:13:31.691 }, 00:13:31.691 { 00:13:31.691 "name": "pt2", 00:13:31.691 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:31.691 "is_configured": true, 00:13:31.691 "data_offset": 256, 00:13:31.691 "data_size": 7936 00:13:31.691 } 00:13:31.691 ] 00:13:31.691 } 00:13:31.691 } 00:13:31.691 }' 00:13:31.691 19:54:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:31.692 19:54:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:13:31.692 pt2' 00:13:31.692 19:54:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:31.692 19:54:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:13:31.692 19:54:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:31.692 19:54:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:13:31.692 19:54:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:13:31.692 19:54:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.692 19:54:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:13:31.692 19:54:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.692 19:54:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:13:31.692 19:54:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:13:31.692 19:54:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:31.692 19:54:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:13:31.692 19:54:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.692 19:54:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:31.692 19:54:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:13:31.692 19:54:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.949 19:54:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:13:31.950 19:54:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:13:31.950 19:54:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:31.950 19:54:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.950 19:54:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:13:31.950 19:54:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:13:31.950 [2024-11-26 19:54:22.644278] 
bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:31.950 19:54:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.950 19:54:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # '[' 5045bd9c-9ec8-46a4-8e5d-202641a7c2fc '!=' 5045bd9c-9ec8-46a4-8e5d-202641a7c2fc ']' 00:13:31.950 19:54:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:13:31.950 19:54:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:31.950 19:54:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:13:31.950 19:54:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:13:31.950 19:54:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.950 19:54:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:13:31.950 [2024-11-26 19:54:22.676091] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:13:31.950 19:54:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.950 19:54:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:31.950 19:54:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:31.950 19:54:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:31.950 19:54:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:31.950 19:54:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:31.950 19:54:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:31.950 19:54:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:13:31.950 19:54:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:31.950 19:54:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:31.950 19:54:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:31.950 19:54:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:31.950 19:54:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.950 19:54:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:31.950 19:54:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:13:31.950 19:54:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.950 19:54:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:31.950 "name": "raid_bdev1", 00:13:31.950 "uuid": "5045bd9c-9ec8-46a4-8e5d-202641a7c2fc", 00:13:31.950 "strip_size_kb": 0, 00:13:31.950 "state": "online", 00:13:31.950 "raid_level": "raid1", 00:13:31.950 "superblock": true, 00:13:31.950 "num_base_bdevs": 2, 00:13:31.950 "num_base_bdevs_discovered": 1, 00:13:31.950 "num_base_bdevs_operational": 1, 00:13:31.950 "base_bdevs_list": [ 00:13:31.950 { 00:13:31.950 "name": null, 00:13:31.950 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:31.950 "is_configured": false, 00:13:31.950 "data_offset": 0, 00:13:31.950 "data_size": 7936 00:13:31.950 }, 00:13:31.950 { 00:13:31.950 "name": "pt2", 00:13:31.950 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:31.950 "is_configured": true, 00:13:31.950 "data_offset": 256, 00:13:31.950 "data_size": 7936 00:13:31.950 } 00:13:31.950 ] 00:13:31.950 }' 00:13:31.950 19:54:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:31.950 19:54:22 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:13:32.209 19:54:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:32.209 19:54:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.209 19:54:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:13:32.209 [2024-11-26 19:54:23.008153] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:32.209 [2024-11-26 19:54:23.008180] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:32.209 [2024-11-26 19:54:23.008252] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:32.209 [2024-11-26 19:54:23.008296] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:32.209 [2024-11-26 19:54:23.008306] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:13:32.209 19:54:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.209 19:54:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:32.209 19:54:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:13:32.209 19:54:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.209 19:54:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:13:32.209 19:54:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.209 19:54:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:13:32.209 19:54:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:13:32.209 19:54:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 
-- # (( i = 1 )) 00:13:32.209 19:54:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:13:32.209 19:54:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:13:32.209 19:54:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.209 19:54:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:13:32.209 19:54:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.209 19:54:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:13:32.209 19:54:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:13:32.209 19:54:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:13:32.209 19:54:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:13:32.209 19:54:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@519 -- # i=1 00:13:32.209 19:54:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:32.209 19:54:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.209 19:54:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:13:32.209 [2024-11-26 19:54:23.060141] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:32.209 [2024-11-26 19:54:23.060193] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:32.209 [2024-11-26 19:54:23.060209] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:13:32.209 [2024-11-26 19:54:23.060218] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:32.209 [2024-11-26 19:54:23.062223] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:32.209 [2024-11-26 19:54:23.062256] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:32.209 [2024-11-26 19:54:23.062328] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:32.209 [2024-11-26 19:54:23.062383] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:32.209 [2024-11-26 19:54:23.062484] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:13:32.209 [2024-11-26 19:54:23.062496] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:13:32.209 [2024-11-26 19:54:23.062701] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:13:32.209 [2024-11-26 19:54:23.062823] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:13:32.209 [2024-11-26 19:54:23.062831] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:13:32.209 [2024-11-26 19:54:23.062946] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:32.209 pt2 00:13:32.209 19:54:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.209 19:54:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:32.209 19:54:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:32.209 19:54:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:32.209 19:54:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:32.209 19:54:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:32.209 19:54:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=1 00:13:32.209 19:54:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:32.209 19:54:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:32.209 19:54:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:32.210 19:54:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:32.210 19:54:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:32.210 19:54:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.210 19:54:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:13:32.210 19:54:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:32.210 19:54:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.210 19:54:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:32.210 "name": "raid_bdev1", 00:13:32.210 "uuid": "5045bd9c-9ec8-46a4-8e5d-202641a7c2fc", 00:13:32.210 "strip_size_kb": 0, 00:13:32.210 "state": "online", 00:13:32.210 "raid_level": "raid1", 00:13:32.210 "superblock": true, 00:13:32.210 "num_base_bdevs": 2, 00:13:32.210 "num_base_bdevs_discovered": 1, 00:13:32.210 "num_base_bdevs_operational": 1, 00:13:32.210 "base_bdevs_list": [ 00:13:32.210 { 00:13:32.210 "name": null, 00:13:32.210 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:32.210 "is_configured": false, 00:13:32.210 "data_offset": 256, 00:13:32.210 "data_size": 7936 00:13:32.210 }, 00:13:32.210 { 00:13:32.210 "name": "pt2", 00:13:32.210 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:32.210 "is_configured": true, 00:13:32.210 "data_offset": 256, 00:13:32.210 "data_size": 7936 00:13:32.210 } 00:13:32.210 ] 00:13:32.210 }' 
00:13:32.210 19:54:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:32.210 19:54:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:13:32.469 19:54:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:32.469 19:54:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.469 19:54:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:13:32.469 [2024-11-26 19:54:23.392198] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:32.469 [2024-11-26 19:54:23.392226] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:32.469 [2024-11-26 19:54:23.392297] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:32.469 [2024-11-26 19:54:23.392355] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:32.469 [2024-11-26 19:54:23.392364] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:13:32.469 19:54:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.469 19:54:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:32.469 19:54:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.469 19:54:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:13:32.469 19:54:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:13:32.730 19:54:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.730 19:54:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:13:32.730 19:54:23 bdev_raid.raid_superblock_test_4k 
-- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:13:32.730 19:54:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:13:32.730 19:54:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:32.730 19:54:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.730 19:54:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:13:32.730 [2024-11-26 19:54:23.432213] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:32.730 [2024-11-26 19:54:23.432271] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:32.730 [2024-11-26 19:54:23.432289] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:13:32.730 [2024-11-26 19:54:23.432298] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:32.730 [2024-11-26 19:54:23.434312] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:32.730 [2024-11-26 19:54:23.434351] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:32.730 [2024-11-26 19:54:23.434428] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:13:32.730 [2024-11-26 19:54:23.434469] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:32.730 [2024-11-26 19:54:23.434589] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:13:32.730 [2024-11-26 19:54:23.434598] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:32.730 [2024-11-26 19:54:23.434612] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:13:32.730 [2024-11-26 19:54:23.434654] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:32.730 [2024-11-26 19:54:23.434717] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:13:32.730 [2024-11-26 19:54:23.434724] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:13:32.730 [2024-11-26 19:54:23.434943] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:13:32.730 [2024-11-26 19:54:23.435071] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:13:32.730 [2024-11-26 19:54:23.435084] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:13:32.730 [2024-11-26 19:54:23.435199] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:32.730 pt1 00:13:32.730 19:54:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.730 19:54:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:13:32.730 19:54:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:32.730 19:54:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:32.730 19:54:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:32.730 19:54:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:32.730 19:54:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:32.730 19:54:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:32.730 19:54:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:32.730 19:54:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:13:32.730 19:54:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:32.730 19:54:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:32.730 19:54:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:32.730 19:54:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:32.730 19:54:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.731 19:54:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:13:32.731 19:54:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.731 19:54:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:32.731 "name": "raid_bdev1", 00:13:32.731 "uuid": "5045bd9c-9ec8-46a4-8e5d-202641a7c2fc", 00:13:32.731 "strip_size_kb": 0, 00:13:32.731 "state": "online", 00:13:32.731 "raid_level": "raid1", 00:13:32.731 "superblock": true, 00:13:32.731 "num_base_bdevs": 2, 00:13:32.731 "num_base_bdevs_discovered": 1, 00:13:32.731 "num_base_bdevs_operational": 1, 00:13:32.731 "base_bdevs_list": [ 00:13:32.731 { 00:13:32.731 "name": null, 00:13:32.731 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:32.731 "is_configured": false, 00:13:32.731 "data_offset": 256, 00:13:32.731 "data_size": 7936 00:13:32.731 }, 00:13:32.731 { 00:13:32.731 "name": "pt2", 00:13:32.731 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:32.731 "is_configured": true, 00:13:32.731 "data_offset": 256, 00:13:32.731 "data_size": 7936 00:13:32.731 } 00:13:32.731 ] 00:13:32.731 }' 00:13:32.731 19:54:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:32.731 19:54:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:13:32.989 19:54:23 bdev_raid.raid_superblock_test_4k 
-- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:13:32.989 19:54:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:13:32.989 19:54:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.989 19:54:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:13:32.989 19:54:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.989 19:54:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:13:32.989 19:54:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:32.989 19:54:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.989 19:54:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:13:32.989 19:54:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:13:32.989 [2024-11-26 19:54:23.760494] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:32.989 19:54:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.989 19:54:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # '[' 5045bd9c-9ec8-46a4-8e5d-202641a7c2fc '!=' 5045bd9c-9ec8-46a4-8e5d-202641a7c2fc ']' 00:13:32.989 19:54:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@563 -- # killprocess 83669 00:13:32.989 19:54:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@954 -- # '[' -z 83669 ']' 00:13:32.989 19:54:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@958 -- # kill -0 83669 00:13:32.989 19:54:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@959 -- # uname 00:13:32.989 19:54:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux 
']' 00:13:32.989 19:54:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83669 00:13:32.989 killing process with pid 83669 00:13:32.989 19:54:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:32.989 19:54:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:32.989 19:54:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83669' 00:13:32.989 19:54:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@973 -- # kill 83669 00:13:32.989 [2024-11-26 19:54:23.814596] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:32.989 19:54:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@978 -- # wait 83669 00:13:32.989 [2024-11-26 19:54:23.814687] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:32.989 [2024-11-26 19:54:23.814732] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:32.989 [2024-11-26 19:54:23.814746] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:13:32.989 [2024-11-26 19:54:23.921150] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:33.923 19:54:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@565 -- # return 0 00:13:33.923 ************************************ 00:13:33.923 END TEST raid_superblock_test_4k 00:13:33.923 ************************************ 00:13:33.923 00:13:33.923 real 0m4.277s 00:13:33.923 user 0m6.477s 00:13:33.923 sys 0m0.775s 00:13:33.923 19:54:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:33.923 19:54:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:13:33.923 19:54:24 bdev_raid -- bdev/bdev_raid.sh@999 -- # '[' true = 
true ']' 00:13:33.923 19:54:24 bdev_raid -- bdev/bdev_raid.sh@1000 -- # run_test raid_rebuild_test_sb_4k raid_rebuild_test raid1 2 true false true 00:13:33.923 19:54:24 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:13:33.923 19:54:24 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:33.923 19:54:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:33.923 ************************************ 00:13:33.924 START TEST raid_rebuild_test_sb_4k 00:13:33.924 ************************************ 00:13:33.924 19:54:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:13:33.924 19:54:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:33.924 19:54:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:13:33.924 19:54:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:13:33.924 19:54:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:13:33.924 19:54:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:33.924 19:54:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:33.924 19:54:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:33.924 19:54:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:33.924 19:54:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:33.924 19:54:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:33.924 19:54:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:33.924 19:54:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:33.924 19:54:24 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:33.924 19:54:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:13:33.924 19:54:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:33.924 19:54:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:33.924 19:54:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:33.924 19:54:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:33.924 19:54:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:33.924 19:54:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:33.924 19:54:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:33.924 19:54:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:33.924 19:54:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:13:33.924 19:54:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:13:33.924 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:13:33.924 19:54:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@597 -- # raid_pid=83975 00:13:33.924 19:54:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@598 -- # waitforlisten 83975 00:13:33.924 19:54:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@835 -- # '[' -z 83975 ']' 00:13:33.924 19:54:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:33.924 19:54:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:33.924 19:54:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:33.924 19:54:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:33.924 19:54:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:33.924 19:54:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:13:33.924 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:33.924 Zero copy mechanism will not be used. 00:13:33.924 [2024-11-26 19:54:24.643818] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 
00:13:33.924 [2024-11-26 19:54:24.643958] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83975 ] 00:13:33.924 [2024-11-26 19:54:24.801275] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:34.182 [2024-11-26 19:54:24.900295] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:34.182 [2024-11-26 19:54:25.020389] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:34.182 [2024-11-26 19:54:25.020427] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:34.749 19:54:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:34.749 19:54:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # return 0 00:13:34.749 19:54:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:34.749 19:54:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1_malloc 00:13:34.749 19:54:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.749 19:54:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:13:34.749 BaseBdev1_malloc 00:13:34.749 19:54:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.749 19:54:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:34.749 19:54:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.749 19:54:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:13:34.749 [2024-11-26 19:54:25.518056] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:34.749 [2024-11-26 19:54:25.518114] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:34.749 [2024-11-26 19:54:25.518134] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:34.749 [2024-11-26 19:54:25.518144] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:34.749 [2024-11-26 19:54:25.520082] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:34.749 [2024-11-26 19:54:25.520227] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:34.749 BaseBdev1 00:13:34.749 19:54:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.749 19:54:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:34.749 19:54:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2_malloc 00:13:34.749 19:54:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.749 19:54:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:13:34.749 BaseBdev2_malloc 00:13:34.749 19:54:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.749 19:54:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:34.749 19:54:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.749 19:54:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:13:34.749 [2024-11-26 19:54:25.551245] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:34.749 [2024-11-26 19:54:25.551390] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:13:34.749 [2024-11-26 19:54:25.551413] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:34.749 [2024-11-26 19:54:25.551423] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:34.749 [2024-11-26 19:54:25.553250] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:34.749 [2024-11-26 19:54:25.553276] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:34.749 BaseBdev2 00:13:34.749 19:54:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.749 19:54:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -b spare_malloc 00:13:34.749 19:54:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.749 19:54:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:13:34.749 spare_malloc 00:13:34.749 19:54:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.749 19:54:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:34.749 19:54:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.749 19:54:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:13:34.749 spare_delay 00:13:34.749 19:54:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.749 19:54:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:34.749 19:54:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.749 19:54:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:13:34.749 
[2024-11-26 19:54:25.607711] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:34.749 [2024-11-26 19:54:25.607758] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:34.749 [2024-11-26 19:54:25.607774] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:13:34.749 [2024-11-26 19:54:25.607784] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:34.749 [2024-11-26 19:54:25.609614] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:34.749 [2024-11-26 19:54:25.609644] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:34.749 spare 00:13:34.749 19:54:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.749 19:54:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:13:34.749 19:54:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.749 19:54:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:13:34.749 [2024-11-26 19:54:25.615762] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:34.749 [2024-11-26 19:54:25.617297] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:34.749 [2024-11-26 19:54:25.617457] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:34.749 [2024-11-26 19:54:25.617469] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:13:34.749 [2024-11-26 19:54:25.617666] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:13:34.749 [2024-11-26 19:54:25.617796] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:34.749 [2024-11-26 
19:54:25.617803] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:34.749 [2024-11-26 19:54:25.617913] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:34.749 19:54:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.749 19:54:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:34.749 19:54:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:34.749 19:54:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:34.749 19:54:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:34.749 19:54:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:34.749 19:54:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:34.749 19:54:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:34.749 19:54:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:34.749 19:54:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:34.749 19:54:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:34.749 19:54:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:34.749 19:54:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:34.749 19:54:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.749 19:54:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:13:34.749 19:54:25 bdev_raid.raid_rebuild_test_sb_4k 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.749 19:54:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:34.749 "name": "raid_bdev1", 00:13:34.749 "uuid": "41cf006e-ff27-49a6-b75c-66ca1279d846", 00:13:34.749 "strip_size_kb": 0, 00:13:34.749 "state": "online", 00:13:34.749 "raid_level": "raid1", 00:13:34.749 "superblock": true, 00:13:34.749 "num_base_bdevs": 2, 00:13:34.749 "num_base_bdevs_discovered": 2, 00:13:34.749 "num_base_bdevs_operational": 2, 00:13:34.750 "base_bdevs_list": [ 00:13:34.750 { 00:13:34.750 "name": "BaseBdev1", 00:13:34.750 "uuid": "2c57d802-5b9f-52bc-b5eb-b9c3d0e72f0e", 00:13:34.750 "is_configured": true, 00:13:34.750 "data_offset": 256, 00:13:34.750 "data_size": 7936 00:13:34.750 }, 00:13:34.750 { 00:13:34.750 "name": "BaseBdev2", 00:13:34.750 "uuid": "be8392f4-aa81-50ab-8f64-33f72bc6df3b", 00:13:34.750 "is_configured": true, 00:13:34.750 "data_offset": 256, 00:13:34.750 "data_size": 7936 00:13:34.750 } 00:13:34.750 ] 00:13:34.750 }' 00:13:34.750 19:54:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:34.750 19:54:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:13:35.008 19:54:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:35.008 19:54:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.008 19:54:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:13:35.275 19:54:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:35.275 [2024-11-26 19:54:25.948083] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:35.275 19:54:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.275 19:54:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # 
raid_bdev_size=7936 00:13:35.275 19:54:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:35.275 19:54:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:35.275 19:54:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.275 19:54:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:13:35.275 19:54:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.275 19:54:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:13:35.275 19:54:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:13:35.275 19:54:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:13:35.275 19:54:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:13:35.275 19:54:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:13:35.275 19:54:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:35.275 19:54:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:13:35.275 19:54:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:35.275 19:54:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:35.275 19:54:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:35.275 19:54:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:13:35.275 19:54:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:35.275 19:54:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:35.275 
19:54:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:13:35.275 [2024-11-26 19:54:26.203931] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:13:35.533 /dev/nbd0 00:13:35.533 19:54:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:35.533 19:54:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:35.533 19:54:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:35.533 19:54:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:13:35.533 19:54:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:35.533 19:54:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:35.533 19:54:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:35.533 19:54:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:13:35.533 19:54:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:35.533 19:54:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:35.533 19:54:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:35.533 1+0 records in 00:13:35.533 1+0 records out 00:13:35.533 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000234842 s, 17.4 MB/s 00:13:35.533 19:54:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:35.533 19:54:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:13:35.533 19:54:26 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:35.533 19:54:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:35.533 19:54:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:13:35.533 19:54:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:35.533 19:54:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:35.533 19:54:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:13:35.533 19:54:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:13:35.533 19:54:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:13:36.098 7936+0 records in 00:13:36.098 7936+0 records out 00:13:36.098 32505856 bytes (33 MB, 31 MiB) copied, 0.520644 s, 62.4 MB/s 00:13:36.098 19:54:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:36.098 19:54:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:36.098 19:54:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:36.098 19:54:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:36.098 19:54:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:13:36.098 19:54:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:36.098 19:54:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:36.098 [2024-11-26 19:54:26.977910] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:13:36.098 19:54:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:36.098 19:54:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:36.098 19:54:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:36.098 19:54:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:36.098 19:54:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:36.098 19:54:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:36.098 19:54:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:13:36.098 19:54:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:13:36.098 19:54:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:36.098 19:54:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.098 19:54:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:13:36.098 [2024-11-26 19:54:27.001977] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:36.098 19:54:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.098 19:54:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:36.098 19:54:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:36.098 19:54:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:36.098 19:54:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:36.098 19:54:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:36.098 19:54:27 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:36.098 19:54:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:36.098 19:54:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:36.098 19:54:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:36.098 19:54:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:36.098 19:54:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:36.098 19:54:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.098 19:54:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:13:36.098 19:54:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:36.098 19:54:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.357 19:54:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:36.357 "name": "raid_bdev1", 00:13:36.357 "uuid": "41cf006e-ff27-49a6-b75c-66ca1279d846", 00:13:36.357 "strip_size_kb": 0, 00:13:36.357 "state": "online", 00:13:36.357 "raid_level": "raid1", 00:13:36.357 "superblock": true, 00:13:36.357 "num_base_bdevs": 2, 00:13:36.357 "num_base_bdevs_discovered": 1, 00:13:36.357 "num_base_bdevs_operational": 1, 00:13:36.357 "base_bdevs_list": [ 00:13:36.357 { 00:13:36.357 "name": null, 00:13:36.357 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:36.357 "is_configured": false, 00:13:36.357 "data_offset": 0, 00:13:36.357 "data_size": 7936 00:13:36.357 }, 00:13:36.357 { 00:13:36.357 "name": "BaseBdev2", 00:13:36.357 "uuid": "be8392f4-aa81-50ab-8f64-33f72bc6df3b", 00:13:36.357 "is_configured": true, 00:13:36.357 "data_offset": 256, 00:13:36.357 
"data_size": 7936 00:13:36.357 } 00:13:36.357 ] 00:13:36.357 }' 00:13:36.357 19:54:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:36.357 19:54:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:13:36.615 19:54:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:36.615 19:54:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.615 19:54:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:13:36.615 [2024-11-26 19:54:27.326053] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:36.615 [2024-11-26 19:54:27.336185] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:13:36.615 19:54:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.615 19:54:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:36.615 [2024-11-26 19:54:27.337839] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:37.548 19:54:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:37.548 19:54:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:37.548 19:54:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:37.548 19:54:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:37.548 19:54:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:37.548 19:54:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:37.548 19:54:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:13:37.548 19:54:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.548 19:54:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:13:37.548 19:54:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.548 19:54:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:37.548 "name": "raid_bdev1", 00:13:37.548 "uuid": "41cf006e-ff27-49a6-b75c-66ca1279d846", 00:13:37.548 "strip_size_kb": 0, 00:13:37.548 "state": "online", 00:13:37.548 "raid_level": "raid1", 00:13:37.548 "superblock": true, 00:13:37.548 "num_base_bdevs": 2, 00:13:37.548 "num_base_bdevs_discovered": 2, 00:13:37.548 "num_base_bdevs_operational": 2, 00:13:37.548 "process": { 00:13:37.548 "type": "rebuild", 00:13:37.548 "target": "spare", 00:13:37.548 "progress": { 00:13:37.548 "blocks": 2560, 00:13:37.548 "percent": 32 00:13:37.548 } 00:13:37.548 }, 00:13:37.548 "base_bdevs_list": [ 00:13:37.548 { 00:13:37.548 "name": "spare", 00:13:37.548 "uuid": "7ae37f37-2915-58ed-af7d-bc27b9f19b56", 00:13:37.548 "is_configured": true, 00:13:37.548 "data_offset": 256, 00:13:37.548 "data_size": 7936 00:13:37.548 }, 00:13:37.548 { 00:13:37.548 "name": "BaseBdev2", 00:13:37.548 "uuid": "be8392f4-aa81-50ab-8f64-33f72bc6df3b", 00:13:37.548 "is_configured": true, 00:13:37.548 "data_offset": 256, 00:13:37.548 "data_size": 7936 00:13:37.548 } 00:13:37.548 ] 00:13:37.548 }' 00:13:37.548 19:54:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:37.548 19:54:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:37.548 19:54:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:37.548 19:54:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:37.548 
19:54:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:37.548 19:54:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.548 19:54:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:13:37.548 [2024-11-26 19:54:28.448066] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:37.848 [2024-11-26 19:54:28.544598] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:37.848 [2024-11-26 19:54:28.544659] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:37.848 [2024-11-26 19:54:28.544671] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:37.848 [2024-11-26 19:54:28.544684] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:37.848 19:54:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.848 19:54:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:37.848 19:54:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:37.848 19:54:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:37.848 19:54:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:37.848 19:54:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:37.848 19:54:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:37.848 19:54:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:37.848 19:54:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:37.848 19:54:28 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:37.848 19:54:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:37.848 19:54:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:37.848 19:54:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.848 19:54:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:13:37.848 19:54:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:37.848 19:54:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.848 19:54:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:37.848 "name": "raid_bdev1", 00:13:37.848 "uuid": "41cf006e-ff27-49a6-b75c-66ca1279d846", 00:13:37.848 "strip_size_kb": 0, 00:13:37.848 "state": "online", 00:13:37.848 "raid_level": "raid1", 00:13:37.848 "superblock": true, 00:13:37.848 "num_base_bdevs": 2, 00:13:37.848 "num_base_bdevs_discovered": 1, 00:13:37.848 "num_base_bdevs_operational": 1, 00:13:37.848 "base_bdevs_list": [ 00:13:37.848 { 00:13:37.848 "name": null, 00:13:37.848 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:37.848 "is_configured": false, 00:13:37.848 "data_offset": 0, 00:13:37.848 "data_size": 7936 00:13:37.848 }, 00:13:37.848 { 00:13:37.848 "name": "BaseBdev2", 00:13:37.848 "uuid": "be8392f4-aa81-50ab-8f64-33f72bc6df3b", 00:13:37.848 "is_configured": true, 00:13:37.848 "data_offset": 256, 00:13:37.848 "data_size": 7936 00:13:37.848 } 00:13:37.848 ] 00:13:37.848 }' 00:13:37.848 19:54:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:37.848 19:54:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:13:38.127 19:54:28 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:38.127 19:54:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:38.127 19:54:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:38.127 19:54:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:38.127 19:54:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:38.127 19:54:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:38.127 19:54:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:38.127 19:54:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.127 19:54:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:13:38.127 19:54:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.127 19:54:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:38.127 "name": "raid_bdev1", 00:13:38.127 "uuid": "41cf006e-ff27-49a6-b75c-66ca1279d846", 00:13:38.127 "strip_size_kb": 0, 00:13:38.127 "state": "online", 00:13:38.127 "raid_level": "raid1", 00:13:38.127 "superblock": true, 00:13:38.127 "num_base_bdevs": 2, 00:13:38.127 "num_base_bdevs_discovered": 1, 00:13:38.127 "num_base_bdevs_operational": 1, 00:13:38.127 "base_bdevs_list": [ 00:13:38.127 { 00:13:38.127 "name": null, 00:13:38.127 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:38.127 "is_configured": false, 00:13:38.127 "data_offset": 0, 00:13:38.127 "data_size": 7936 00:13:38.127 }, 00:13:38.127 { 00:13:38.127 "name": "BaseBdev2", 00:13:38.127 "uuid": "be8392f4-aa81-50ab-8f64-33f72bc6df3b", 00:13:38.127 "is_configured": true, 00:13:38.127 "data_offset": 256, 00:13:38.127 "data_size": 7936 
00:13:38.127 } 00:13:38.127 ] 00:13:38.127 }' 00:13:38.127 19:54:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:38.127 19:54:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:38.127 19:54:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:38.127 19:54:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:38.127 19:54:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:38.127 19:54:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.127 19:54:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:13:38.127 [2024-11-26 19:54:29.012568] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:38.127 [2024-11-26 19:54:29.021831] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:13:38.127 19:54:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.127 19:54:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:38.127 [2024-11-26 19:54:29.023539] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:39.509 19:54:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:39.509 19:54:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:39.509 19:54:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:39.509 19:54:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:39.509 19:54:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local 
raid_bdev_info 00:13:39.509 19:54:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:39.509 19:54:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.509 19:54:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:13:39.509 19:54:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:39.509 19:54:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.509 19:54:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:39.509 "name": "raid_bdev1", 00:13:39.509 "uuid": "41cf006e-ff27-49a6-b75c-66ca1279d846", 00:13:39.509 "strip_size_kb": 0, 00:13:39.509 "state": "online", 00:13:39.509 "raid_level": "raid1", 00:13:39.509 "superblock": true, 00:13:39.509 "num_base_bdevs": 2, 00:13:39.509 "num_base_bdevs_discovered": 2, 00:13:39.509 "num_base_bdevs_operational": 2, 00:13:39.509 "process": { 00:13:39.509 "type": "rebuild", 00:13:39.509 "target": "spare", 00:13:39.509 "progress": { 00:13:39.509 "blocks": 2560, 00:13:39.509 "percent": 32 00:13:39.509 } 00:13:39.509 }, 00:13:39.509 "base_bdevs_list": [ 00:13:39.509 { 00:13:39.509 "name": "spare", 00:13:39.509 "uuid": "7ae37f37-2915-58ed-af7d-bc27b9f19b56", 00:13:39.509 "is_configured": true, 00:13:39.509 "data_offset": 256, 00:13:39.509 "data_size": 7936 00:13:39.509 }, 00:13:39.509 { 00:13:39.509 "name": "BaseBdev2", 00:13:39.509 "uuid": "be8392f4-aa81-50ab-8f64-33f72bc6df3b", 00:13:39.509 "is_configured": true, 00:13:39.509 "data_offset": 256, 00:13:39.509 "data_size": 7936 00:13:39.509 } 00:13:39.509 ] 00:13:39.509 }' 00:13:39.509 19:54:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:39.509 19:54:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 
00:13:39.509 19:54:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:39.509 19:54:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:39.509 19:54:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:13:39.509 19:54:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:13:39.509 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:13:39.509 19:54:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:13:39.509 19:54:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:13:39.509 19:54:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:13:39.509 19:54:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # local timeout=540 00:13:39.509 19:54:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:39.509 19:54:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:39.509 19:54:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:39.510 19:54:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:39.510 19:54:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:39.510 19:54:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:39.510 19:54:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:39.510 19:54:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.510 19:54:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 
00:13:39.510 19:54:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:39.510 19:54:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.510 19:54:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:39.510 "name": "raid_bdev1", 00:13:39.510 "uuid": "41cf006e-ff27-49a6-b75c-66ca1279d846", 00:13:39.510 "strip_size_kb": 0, 00:13:39.510 "state": "online", 00:13:39.510 "raid_level": "raid1", 00:13:39.510 "superblock": true, 00:13:39.510 "num_base_bdevs": 2, 00:13:39.510 "num_base_bdevs_discovered": 2, 00:13:39.510 "num_base_bdevs_operational": 2, 00:13:39.510 "process": { 00:13:39.510 "type": "rebuild", 00:13:39.510 "target": "spare", 00:13:39.510 "progress": { 00:13:39.510 "blocks": 2816, 00:13:39.510 "percent": 35 00:13:39.510 } 00:13:39.510 }, 00:13:39.510 "base_bdevs_list": [ 00:13:39.510 { 00:13:39.510 "name": "spare", 00:13:39.510 "uuid": "7ae37f37-2915-58ed-af7d-bc27b9f19b56", 00:13:39.510 "is_configured": true, 00:13:39.510 "data_offset": 256, 00:13:39.510 "data_size": 7936 00:13:39.510 }, 00:13:39.510 { 00:13:39.510 "name": "BaseBdev2", 00:13:39.510 "uuid": "be8392f4-aa81-50ab-8f64-33f72bc6df3b", 00:13:39.510 "is_configured": true, 00:13:39.510 "data_offset": 256, 00:13:39.510 "data_size": 7936 00:13:39.510 } 00:13:39.510 ] 00:13:39.510 }' 00:13:39.510 19:54:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:39.510 19:54:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:39.510 19:54:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:39.510 19:54:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:39.510 19:54:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:40.443 19:54:31 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:40.443 19:54:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:40.443 19:54:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:40.443 19:54:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:40.443 19:54:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:40.443 19:54:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:40.443 19:54:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:40.443 19:54:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.443 19:54:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:13:40.443 19:54:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:40.443 19:54:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.443 19:54:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:40.443 "name": "raid_bdev1", 00:13:40.443 "uuid": "41cf006e-ff27-49a6-b75c-66ca1279d846", 00:13:40.443 "strip_size_kb": 0, 00:13:40.443 "state": "online", 00:13:40.443 "raid_level": "raid1", 00:13:40.443 "superblock": true, 00:13:40.443 "num_base_bdevs": 2, 00:13:40.443 "num_base_bdevs_discovered": 2, 00:13:40.443 "num_base_bdevs_operational": 2, 00:13:40.443 "process": { 00:13:40.443 "type": "rebuild", 00:13:40.443 "target": "spare", 00:13:40.443 "progress": { 00:13:40.443 "blocks": 5632, 00:13:40.443 "percent": 70 00:13:40.443 } 00:13:40.443 }, 00:13:40.443 "base_bdevs_list": [ 00:13:40.443 { 00:13:40.443 "name": "spare", 00:13:40.443 "uuid": 
"7ae37f37-2915-58ed-af7d-bc27b9f19b56", 00:13:40.443 "is_configured": true, 00:13:40.443 "data_offset": 256, 00:13:40.443 "data_size": 7936 00:13:40.443 }, 00:13:40.443 { 00:13:40.443 "name": "BaseBdev2", 00:13:40.443 "uuid": "be8392f4-aa81-50ab-8f64-33f72bc6df3b", 00:13:40.443 "is_configured": true, 00:13:40.443 "data_offset": 256, 00:13:40.443 "data_size": 7936 00:13:40.443 } 00:13:40.443 ] 00:13:40.443 }' 00:13:40.443 19:54:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:40.443 19:54:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:40.443 19:54:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:40.443 19:54:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:40.443 19:54:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:41.376 [2024-11-26 19:54:32.140493] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:41.376 [2024-11-26 19:54:32.140564] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:41.376 [2024-11-26 19:54:32.140667] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:41.634 19:54:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:41.634 19:54:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:41.634 19:54:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:41.634 19:54:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:41.634 19:54:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:41.634 19:54:32 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:41.634 19:54:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:41.634 19:54:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.634 19:54:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:13:41.634 19:54:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:41.634 19:54:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.634 19:54:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:41.634 "name": "raid_bdev1", 00:13:41.634 "uuid": "41cf006e-ff27-49a6-b75c-66ca1279d846", 00:13:41.634 "strip_size_kb": 0, 00:13:41.634 "state": "online", 00:13:41.634 "raid_level": "raid1", 00:13:41.634 "superblock": true, 00:13:41.634 "num_base_bdevs": 2, 00:13:41.634 "num_base_bdevs_discovered": 2, 00:13:41.634 "num_base_bdevs_operational": 2, 00:13:41.634 "base_bdevs_list": [ 00:13:41.634 { 00:13:41.634 "name": "spare", 00:13:41.634 "uuid": "7ae37f37-2915-58ed-af7d-bc27b9f19b56", 00:13:41.634 "is_configured": true, 00:13:41.634 "data_offset": 256, 00:13:41.634 "data_size": 7936 00:13:41.634 }, 00:13:41.634 { 00:13:41.634 "name": "BaseBdev2", 00:13:41.634 "uuid": "be8392f4-aa81-50ab-8f64-33f72bc6df3b", 00:13:41.634 "is_configured": true, 00:13:41.634 "data_offset": 256, 00:13:41.634 "data_size": 7936 00:13:41.634 } 00:13:41.634 ] 00:13:41.634 }' 00:13:41.634 19:54:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:41.634 19:54:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:41.634 19:54:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:41.634 19:54:32 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:41.634 19:54:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@709 -- # break 00:13:41.634 19:54:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:41.634 19:54:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:41.634 19:54:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:41.634 19:54:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:41.634 19:54:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:41.634 19:54:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:41.634 19:54:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:41.634 19:54:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.634 19:54:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:13:41.634 19:54:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.634 19:54:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:41.634 "name": "raid_bdev1", 00:13:41.634 "uuid": "41cf006e-ff27-49a6-b75c-66ca1279d846", 00:13:41.634 "strip_size_kb": 0, 00:13:41.634 "state": "online", 00:13:41.634 "raid_level": "raid1", 00:13:41.634 "superblock": true, 00:13:41.634 "num_base_bdevs": 2, 00:13:41.634 "num_base_bdevs_discovered": 2, 00:13:41.634 "num_base_bdevs_operational": 2, 00:13:41.634 "base_bdevs_list": [ 00:13:41.634 { 00:13:41.634 "name": "spare", 00:13:41.634 "uuid": "7ae37f37-2915-58ed-af7d-bc27b9f19b56", 00:13:41.634 "is_configured": true, 00:13:41.634 "data_offset": 256, 00:13:41.634 "data_size": 7936 00:13:41.634 }, 
00:13:41.634 { 00:13:41.634 "name": "BaseBdev2", 00:13:41.634 "uuid": "be8392f4-aa81-50ab-8f64-33f72bc6df3b", 00:13:41.634 "is_configured": true, 00:13:41.634 "data_offset": 256, 00:13:41.634 "data_size": 7936 00:13:41.634 } 00:13:41.634 ] 00:13:41.634 }' 00:13:41.634 19:54:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:41.634 19:54:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:41.634 19:54:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:41.634 19:54:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:41.634 19:54:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:41.634 19:54:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:41.634 19:54:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:41.634 19:54:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:41.634 19:54:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:41.634 19:54:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:41.634 19:54:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:41.634 19:54:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:41.634 19:54:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:41.634 19:54:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:41.634 19:54:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:41.634 19:54:32 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.634 19:54:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:13:41.634 19:54:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:41.634 19:54:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.893 19:54:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:41.893 "name": "raid_bdev1", 00:13:41.893 "uuid": "41cf006e-ff27-49a6-b75c-66ca1279d846", 00:13:41.893 "strip_size_kb": 0, 00:13:41.893 "state": "online", 00:13:41.893 "raid_level": "raid1", 00:13:41.893 "superblock": true, 00:13:41.893 "num_base_bdevs": 2, 00:13:41.893 "num_base_bdevs_discovered": 2, 00:13:41.893 "num_base_bdevs_operational": 2, 00:13:41.893 "base_bdevs_list": [ 00:13:41.893 { 00:13:41.893 "name": "spare", 00:13:41.893 "uuid": "7ae37f37-2915-58ed-af7d-bc27b9f19b56", 00:13:41.893 "is_configured": true, 00:13:41.893 "data_offset": 256, 00:13:41.893 "data_size": 7936 00:13:41.893 }, 00:13:41.893 { 00:13:41.893 "name": "BaseBdev2", 00:13:41.893 "uuid": "be8392f4-aa81-50ab-8f64-33f72bc6df3b", 00:13:41.893 "is_configured": true, 00:13:41.893 "data_offset": 256, 00:13:41.893 "data_size": 7936 00:13:41.893 } 00:13:41.893 ] 00:13:41.893 }' 00:13:41.893 19:54:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:41.893 19:54:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:13:42.151 19:54:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:42.151 19:54:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.151 19:54:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:13:42.151 [2024-11-26 19:54:32.867967] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:42.151 [2024-11-26 19:54:32.867998] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:42.151 [2024-11-26 19:54:32.868078] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:42.151 [2024-11-26 19:54:32.868146] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:42.151 [2024-11-26 19:54:32.868157] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:42.151 19:54:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.151 19:54:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:42.151 19:54:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.151 19:54:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:13:42.151 19:54:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # jq length 00:13:42.151 19:54:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.151 19:54:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:42.151 19:54:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:42.151 19:54:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:13:42.151 19:54:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:13:42.151 19:54:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:42.151 19:54:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:13:42.151 
19:54:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:42.151 19:54:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:42.151 19:54:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:42.151 19:54:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:13:42.151 19:54:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:42.151 19:54:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:42.151 19:54:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:13:42.409 /dev/nbd0 00:13:42.409 19:54:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:42.409 19:54:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:42.409 19:54:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:42.409 19:54:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:13:42.409 19:54:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:42.409 19:54:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:42.409 19:54:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:42.409 19:54:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:13:42.409 19:54:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:42.409 19:54:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:42.410 19:54:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:42.410 1+0 records in 00:13:42.410 1+0 records out 00:13:42.410 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000548482 s, 7.5 MB/s 00:13:42.410 19:54:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:42.410 19:54:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:13:42.410 19:54:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:42.410 19:54:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:42.410 19:54:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:13:42.410 19:54:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:42.410 19:54:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:42.410 19:54:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:13:42.667 /dev/nbd1 00:13:42.667 19:54:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:42.667 19:54:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:42.667 19:54:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:13:42.667 19:54:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:13:42.667 19:54:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:42.667 19:54:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:42.667 19:54:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 
00:13:42.667 19:54:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:13:42.667 19:54:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:42.667 19:54:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:42.667 19:54:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:42.667 1+0 records in 00:13:42.667 1+0 records out 00:13:42.667 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000462937 s, 8.8 MB/s 00:13:42.667 19:54:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:42.667 19:54:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:13:42.667 19:54:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:42.667 19:54:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:42.667 19:54:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:13:42.667 19:54:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:42.667 19:54:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:42.667 19:54:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:13:42.667 19:54:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:13:42.667 19:54:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:42.667 19:54:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:42.667 19:54:33 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:42.667 19:54:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:13:42.667 19:54:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:42.667 19:54:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:42.925 19:54:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:42.925 19:54:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:42.925 19:54:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:42.925 19:54:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:42.925 19:54:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:42.925 19:54:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:42.925 19:54:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:13:42.925 19:54:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:13:42.925 19:54:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:42.925 19:54:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:43.188 19:54:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:43.188 19:54:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:43.188 19:54:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:43.188 19:54:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:43.188 
19:54:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:43.188 19:54:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:43.188 19:54:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:13:43.188 19:54:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:13:43.188 19:54:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:13:43.188 19:54:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:13:43.188 19:54:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.188 19:54:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:13:43.188 19:54:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.188 19:54:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:43.188 19:54:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.188 19:54:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:13:43.188 [2024-11-26 19:54:34.018067] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:43.188 [2024-11-26 19:54:34.018127] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:43.188 [2024-11-26 19:54:34.018154] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:13:43.188 [2024-11-26 19:54:34.018164] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:43.188 [2024-11-26 19:54:34.020601] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:43.188 [2024-11-26 19:54:34.020638] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created 
pt_bdev for: spare 00:13:43.188 [2024-11-26 19:54:34.020733] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:43.188 [2024-11-26 19:54:34.020797] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:43.188 [2024-11-26 19:54:34.020941] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:43.188 spare 00:13:43.188 19:54:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.188 19:54:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:13:43.188 19:54:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.188 19:54:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:13:43.188 [2024-11-26 19:54:34.121044] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:13:43.188 [2024-11-26 19:54:34.121077] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:13:43.188 [2024-11-26 19:54:34.121411] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:13:43.188 [2024-11-26 19:54:34.121624] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:13:43.188 [2024-11-26 19:54:34.121639] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:13:43.188 [2024-11-26 19:54:34.121823] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:43.188 19:54:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.188 19:54:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:43.188 19:54:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:43.188 
19:54:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:43.188 19:54:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:43.445 19:54:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:43.445 19:54:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:43.445 19:54:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:43.445 19:54:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:43.445 19:54:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:43.445 19:54:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:43.446 19:54:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:43.446 19:54:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:43.446 19:54:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.446 19:54:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:13:43.446 19:54:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.446 19:54:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:43.446 "name": "raid_bdev1", 00:13:43.446 "uuid": "41cf006e-ff27-49a6-b75c-66ca1279d846", 00:13:43.446 "strip_size_kb": 0, 00:13:43.446 "state": "online", 00:13:43.446 "raid_level": "raid1", 00:13:43.446 "superblock": true, 00:13:43.446 "num_base_bdevs": 2, 00:13:43.446 "num_base_bdevs_discovered": 2, 00:13:43.446 "num_base_bdevs_operational": 2, 00:13:43.446 "base_bdevs_list": [ 00:13:43.446 { 00:13:43.446 "name": "spare", 00:13:43.446 "uuid": 
"7ae37f37-2915-58ed-af7d-bc27b9f19b56", 00:13:43.446 "is_configured": true, 00:13:43.446 "data_offset": 256, 00:13:43.446 "data_size": 7936 00:13:43.446 }, 00:13:43.446 { 00:13:43.446 "name": "BaseBdev2", 00:13:43.446 "uuid": "be8392f4-aa81-50ab-8f64-33f72bc6df3b", 00:13:43.446 "is_configured": true, 00:13:43.446 "data_offset": 256, 00:13:43.446 "data_size": 7936 00:13:43.446 } 00:13:43.446 ] 00:13:43.446 }' 00:13:43.446 19:54:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:43.446 19:54:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:13:43.803 19:54:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:43.803 19:54:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:43.803 19:54:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:43.803 19:54:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:43.803 19:54:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:43.803 19:54:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:43.803 19:54:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:43.803 19:54:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.803 19:54:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:13:43.803 19:54:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.803 19:54:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:43.803 "name": "raid_bdev1", 00:13:43.803 "uuid": "41cf006e-ff27-49a6-b75c-66ca1279d846", 00:13:43.803 "strip_size_kb": 0, 00:13:43.803 
"state": "online", 00:13:43.803 "raid_level": "raid1", 00:13:43.803 "superblock": true, 00:13:43.803 "num_base_bdevs": 2, 00:13:43.803 "num_base_bdevs_discovered": 2, 00:13:43.803 "num_base_bdevs_operational": 2, 00:13:43.803 "base_bdevs_list": [ 00:13:43.803 { 00:13:43.803 "name": "spare", 00:13:43.803 "uuid": "7ae37f37-2915-58ed-af7d-bc27b9f19b56", 00:13:43.803 "is_configured": true, 00:13:43.803 "data_offset": 256, 00:13:43.803 "data_size": 7936 00:13:43.803 }, 00:13:43.803 { 00:13:43.803 "name": "BaseBdev2", 00:13:43.803 "uuid": "be8392f4-aa81-50ab-8f64-33f72bc6df3b", 00:13:43.803 "is_configured": true, 00:13:43.803 "data_offset": 256, 00:13:43.803 "data_size": 7936 00:13:43.803 } 00:13:43.803 ] 00:13:43.803 }' 00:13:43.803 19:54:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:43.803 19:54:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:43.803 19:54:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:43.803 19:54:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:43.803 19:54:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:43.803 19:54:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.803 19:54:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:13:43.803 19:54:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:13:43.803 19:54:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.803 19:54:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:13:43.803 19:54:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:43.803 19:54:34 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.803 19:54:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:13:43.803 [2024-11-26 19:54:34.550255] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:43.803 19:54:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.803 19:54:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:43.803 19:54:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:43.803 19:54:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:43.803 19:54:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:43.803 19:54:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:43.803 19:54:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:43.803 19:54:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:43.803 19:54:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:43.803 19:54:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:43.803 19:54:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:43.803 19:54:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:43.803 19:54:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:43.803 19:54:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.803 19:54:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:13:43.803 
19:54:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.803 19:54:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:43.803 "name": "raid_bdev1", 00:13:43.803 "uuid": "41cf006e-ff27-49a6-b75c-66ca1279d846", 00:13:43.803 "strip_size_kb": 0, 00:13:43.803 "state": "online", 00:13:43.803 "raid_level": "raid1", 00:13:43.803 "superblock": true, 00:13:43.803 "num_base_bdevs": 2, 00:13:43.803 "num_base_bdevs_discovered": 1, 00:13:43.803 "num_base_bdevs_operational": 1, 00:13:43.803 "base_bdevs_list": [ 00:13:43.803 { 00:13:43.803 "name": null, 00:13:43.803 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:43.803 "is_configured": false, 00:13:43.803 "data_offset": 0, 00:13:43.803 "data_size": 7936 00:13:43.803 }, 00:13:43.803 { 00:13:43.803 "name": "BaseBdev2", 00:13:43.803 "uuid": "be8392f4-aa81-50ab-8f64-33f72bc6df3b", 00:13:43.803 "is_configured": true, 00:13:43.803 "data_offset": 256, 00:13:43.803 "data_size": 7936 00:13:43.803 } 00:13:43.803 ] 00:13:43.803 }' 00:13:43.803 19:54:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:43.803 19:54:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:13:44.085 19:54:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:44.085 19:54:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.085 19:54:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:13:44.085 [2024-11-26 19:54:34.874360] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:44.085 [2024-11-26 19:54:34.874567] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:13:44.085 [2024-11-26 19:54:34.874584] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: 
Re-adding bdev spare to raid bdev raid_bdev1. 00:13:44.085 [2024-11-26 19:54:34.874624] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:44.085 [2024-11-26 19:54:34.886024] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:13:44.085 19:54:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.085 19:54:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@757 -- # sleep 1 00:13:44.085 [2024-11-26 19:54:34.888102] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:45.020 19:54:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:45.020 19:54:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:45.020 19:54:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:45.020 19:54:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:45.020 19:54:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:45.020 19:54:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:45.020 19:54:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:45.020 19:54:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.020 19:54:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:13:45.020 19:54:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.020 19:54:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:45.020 "name": "raid_bdev1", 00:13:45.020 "uuid": "41cf006e-ff27-49a6-b75c-66ca1279d846", 00:13:45.020 
"strip_size_kb": 0, 00:13:45.020 "state": "online", 00:13:45.020 "raid_level": "raid1", 00:13:45.020 "superblock": true, 00:13:45.020 "num_base_bdevs": 2, 00:13:45.020 "num_base_bdevs_discovered": 2, 00:13:45.020 "num_base_bdevs_operational": 2, 00:13:45.020 "process": { 00:13:45.020 "type": "rebuild", 00:13:45.020 "target": "spare", 00:13:45.020 "progress": { 00:13:45.020 "blocks": 2560, 00:13:45.020 "percent": 32 00:13:45.020 } 00:13:45.020 }, 00:13:45.020 "base_bdevs_list": [ 00:13:45.020 { 00:13:45.020 "name": "spare", 00:13:45.020 "uuid": "7ae37f37-2915-58ed-af7d-bc27b9f19b56", 00:13:45.020 "is_configured": true, 00:13:45.020 "data_offset": 256, 00:13:45.020 "data_size": 7936 00:13:45.020 }, 00:13:45.020 { 00:13:45.020 "name": "BaseBdev2", 00:13:45.020 "uuid": "be8392f4-aa81-50ab-8f64-33f72bc6df3b", 00:13:45.020 "is_configured": true, 00:13:45.020 "data_offset": 256, 00:13:45.020 "data_size": 7936 00:13:45.020 } 00:13:45.020 ] 00:13:45.020 }' 00:13:45.020 19:54:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:45.278 19:54:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:45.278 19:54:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:45.278 19:54:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:45.278 19:54:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:13:45.278 19:54:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.278 19:54:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:13:45.278 [2024-11-26 19:54:35.990277] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:45.278 [2024-11-26 19:54:35.994683] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev 
raid_bdev1: No such device 00:13:45.278 [2024-11-26 19:54:35.994736] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:45.278 [2024-11-26 19:54:35.994749] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:45.278 [2024-11-26 19:54:35.994757] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:45.278 19:54:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.278 19:54:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:45.278 19:54:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:45.278 19:54:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:45.278 19:54:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:45.278 19:54:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:45.278 19:54:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:45.278 19:54:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:45.278 19:54:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:45.278 19:54:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:45.278 19:54:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:45.278 19:54:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:45.278 19:54:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:45.278 19:54:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 
00:13:45.278 19:54:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:13:45.278 19:54:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.278 19:54:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:45.278 "name": "raid_bdev1", 00:13:45.278 "uuid": "41cf006e-ff27-49a6-b75c-66ca1279d846", 00:13:45.278 "strip_size_kb": 0, 00:13:45.278 "state": "online", 00:13:45.278 "raid_level": "raid1", 00:13:45.278 "superblock": true, 00:13:45.278 "num_base_bdevs": 2, 00:13:45.278 "num_base_bdevs_discovered": 1, 00:13:45.278 "num_base_bdevs_operational": 1, 00:13:45.278 "base_bdevs_list": [ 00:13:45.278 { 00:13:45.278 "name": null, 00:13:45.278 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:45.278 "is_configured": false, 00:13:45.278 "data_offset": 0, 00:13:45.278 "data_size": 7936 00:13:45.278 }, 00:13:45.278 { 00:13:45.278 "name": "BaseBdev2", 00:13:45.278 "uuid": "be8392f4-aa81-50ab-8f64-33f72bc6df3b", 00:13:45.278 "is_configured": true, 00:13:45.278 "data_offset": 256, 00:13:45.278 "data_size": 7936 00:13:45.278 } 00:13:45.278 ] 00:13:45.278 }' 00:13:45.278 19:54:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:45.278 19:54:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:13:45.537 19:54:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:45.537 19:54:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.537 19:54:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:13:45.537 [2024-11-26 19:54:36.334441] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:45.537 [2024-11-26 19:54:36.334504] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:45.537 [2024-11-26 
19:54:36.334525] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:13:45.537 [2024-11-26 19:54:36.334535] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:45.537 [2024-11-26 19:54:36.334969] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:45.537 [2024-11-26 19:54:36.334985] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:45.537 [2024-11-26 19:54:36.335069] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:45.537 [2024-11-26 19:54:36.335081] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:13:45.537 [2024-11-26 19:54:36.335092] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:13:45.537 [2024-11-26 19:54:36.335111] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:45.537 [2024-11-26 19:54:36.344211] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:13:45.537 spare 00:13:45.537 19:54:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.537 19:54:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@764 -- # sleep 1 00:13:45.537 [2024-11-26 19:54:36.345981] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:46.472 19:54:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:46.472 19:54:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:46.472 19:54:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:46.472 19:54:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 
00:13:46.472 19:54:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:46.472 19:54:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:46.472 19:54:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:46.472 19:54:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.472 19:54:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:13:46.472 19:54:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.472 19:54:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:46.472 "name": "raid_bdev1", 00:13:46.472 "uuid": "41cf006e-ff27-49a6-b75c-66ca1279d846", 00:13:46.472 "strip_size_kb": 0, 00:13:46.472 "state": "online", 00:13:46.472 "raid_level": "raid1", 00:13:46.472 "superblock": true, 00:13:46.472 "num_base_bdevs": 2, 00:13:46.472 "num_base_bdevs_discovered": 2, 00:13:46.472 "num_base_bdevs_operational": 2, 00:13:46.472 "process": { 00:13:46.472 "type": "rebuild", 00:13:46.472 "target": "spare", 00:13:46.472 "progress": { 00:13:46.472 "blocks": 2560, 00:13:46.472 "percent": 32 00:13:46.472 } 00:13:46.472 }, 00:13:46.472 "base_bdevs_list": [ 00:13:46.472 { 00:13:46.472 "name": "spare", 00:13:46.472 "uuid": "7ae37f37-2915-58ed-af7d-bc27b9f19b56", 00:13:46.472 "is_configured": true, 00:13:46.472 "data_offset": 256, 00:13:46.472 "data_size": 7936 00:13:46.472 }, 00:13:46.472 { 00:13:46.472 "name": "BaseBdev2", 00:13:46.472 "uuid": "be8392f4-aa81-50ab-8f64-33f72bc6df3b", 00:13:46.472 "is_configured": true, 00:13:46.472 "data_offset": 256, 00:13:46.472 "data_size": 7936 00:13:46.472 } 00:13:46.472 ] 00:13:46.472 }' 00:13:46.472 19:54:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:46.730 19:54:37 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:46.730 19:54:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:46.730 19:54:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:46.730 19:54:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:13:46.730 19:54:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.730 19:54:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:13:46.730 [2024-11-26 19:54:37.456579] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:46.730 [2024-11-26 19:54:37.552564] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:46.730 [2024-11-26 19:54:37.552617] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:46.730 [2024-11-26 19:54:37.552632] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:46.730 [2024-11-26 19:54:37.552638] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:46.730 19:54:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.730 19:54:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:46.730 19:54:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:46.730 19:54:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:46.730 19:54:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:46.730 19:54:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:46.730 19:54:37 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:46.730 19:54:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:46.730 19:54:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:46.730 19:54:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:46.730 19:54:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:46.730 19:54:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:46.730 19:54:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.730 19:54:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:13:46.730 19:54:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:46.730 19:54:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.730 19:54:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:46.730 "name": "raid_bdev1", 00:13:46.730 "uuid": "41cf006e-ff27-49a6-b75c-66ca1279d846", 00:13:46.730 "strip_size_kb": 0, 00:13:46.730 "state": "online", 00:13:46.730 "raid_level": "raid1", 00:13:46.730 "superblock": true, 00:13:46.730 "num_base_bdevs": 2, 00:13:46.730 "num_base_bdevs_discovered": 1, 00:13:46.730 "num_base_bdevs_operational": 1, 00:13:46.730 "base_bdevs_list": [ 00:13:46.730 { 00:13:46.730 "name": null, 00:13:46.730 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:46.730 "is_configured": false, 00:13:46.730 "data_offset": 0, 00:13:46.730 "data_size": 7936 00:13:46.730 }, 00:13:46.730 { 00:13:46.730 "name": "BaseBdev2", 00:13:46.730 "uuid": "be8392f4-aa81-50ab-8f64-33f72bc6df3b", 00:13:46.730 "is_configured": true, 00:13:46.730 "data_offset": 256, 00:13:46.730 
"data_size": 7936 00:13:46.730 } 00:13:46.730 ] 00:13:46.730 }' 00:13:46.730 19:54:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:46.730 19:54:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:13:46.988 19:54:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:46.988 19:54:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:46.988 19:54:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:46.988 19:54:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:46.988 19:54:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:46.988 19:54:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:46.988 19:54:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:46.988 19:54:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.988 19:54:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:13:46.988 19:54:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.988 19:54:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:46.988 "name": "raid_bdev1", 00:13:46.988 "uuid": "41cf006e-ff27-49a6-b75c-66ca1279d846", 00:13:46.988 "strip_size_kb": 0, 00:13:46.988 "state": "online", 00:13:46.988 "raid_level": "raid1", 00:13:46.988 "superblock": true, 00:13:46.988 "num_base_bdevs": 2, 00:13:46.988 "num_base_bdevs_discovered": 1, 00:13:46.988 "num_base_bdevs_operational": 1, 00:13:46.988 "base_bdevs_list": [ 00:13:46.988 { 00:13:46.988 "name": null, 00:13:46.988 "uuid": "00000000-0000-0000-0000-000000000000", 
00:13:46.988 "is_configured": false, 00:13:46.988 "data_offset": 0, 00:13:46.988 "data_size": 7936 00:13:46.988 }, 00:13:46.988 { 00:13:46.988 "name": "BaseBdev2", 00:13:46.988 "uuid": "be8392f4-aa81-50ab-8f64-33f72bc6df3b", 00:13:46.988 "is_configured": true, 00:13:46.988 "data_offset": 256, 00:13:46.988 "data_size": 7936 00:13:46.988 } 00:13:46.988 ] 00:13:46.988 }' 00:13:46.988 19:54:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:47.247 19:54:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:47.247 19:54:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:47.247 19:54:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:47.247 19:54:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:13:47.247 19:54:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.247 19:54:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:13:47.247 19:54:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.247 19:54:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:47.247 19:54:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.247 19:54:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:13:47.247 [2024-11-26 19:54:37.992097] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:47.247 [2024-11-26 19:54:37.992243] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:47.247 [2024-11-26 19:54:37.992272] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 
0x0x61600000b180 00:13:47.247 [2024-11-26 19:54:37.992281] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:47.247 [2024-11-26 19:54:37.992708] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:47.247 [2024-11-26 19:54:37.992727] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:47.247 [2024-11-26 19:54:37.992798] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:13:47.247 [2024-11-26 19:54:37.992810] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:13:47.247 [2024-11-26 19:54:37.992820] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:47.247 [2024-11-26 19:54:37.992829] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:13:47.247 BaseBdev1 00:13:47.247 19:54:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.247 19:54:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@775 -- # sleep 1 00:13:48.182 19:54:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:48.182 19:54:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:48.182 19:54:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:48.182 19:54:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:48.182 19:54:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:48.182 19:54:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:48.182 19:54:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:13:48.182 19:54:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:48.182 19:54:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:48.182 19:54:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:48.182 19:54:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:48.182 19:54:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.182 19:54:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:13:48.182 19:54:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:48.182 19:54:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.182 19:54:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:48.182 "name": "raid_bdev1", 00:13:48.182 "uuid": "41cf006e-ff27-49a6-b75c-66ca1279d846", 00:13:48.182 "strip_size_kb": 0, 00:13:48.182 "state": "online", 00:13:48.182 "raid_level": "raid1", 00:13:48.182 "superblock": true, 00:13:48.182 "num_base_bdevs": 2, 00:13:48.182 "num_base_bdevs_discovered": 1, 00:13:48.182 "num_base_bdevs_operational": 1, 00:13:48.182 "base_bdevs_list": [ 00:13:48.182 { 00:13:48.182 "name": null, 00:13:48.182 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:48.182 "is_configured": false, 00:13:48.182 "data_offset": 0, 00:13:48.182 "data_size": 7936 00:13:48.182 }, 00:13:48.182 { 00:13:48.182 "name": "BaseBdev2", 00:13:48.182 "uuid": "be8392f4-aa81-50ab-8f64-33f72bc6df3b", 00:13:48.182 "is_configured": true, 00:13:48.182 "data_offset": 256, 00:13:48.182 "data_size": 7936 00:13:48.182 } 00:13:48.182 ] 00:13:48.182 }' 00:13:48.182 19:54:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:48.182 19:54:39 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:13:48.440 19:54:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:48.440 19:54:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:48.440 19:54:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:48.440 19:54:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:48.440 19:54:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:48.440 19:54:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:48.441 19:54:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.441 19:54:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:13:48.441 19:54:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:48.441 19:54:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.441 19:54:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:48.441 "name": "raid_bdev1", 00:13:48.441 "uuid": "41cf006e-ff27-49a6-b75c-66ca1279d846", 00:13:48.441 "strip_size_kb": 0, 00:13:48.441 "state": "online", 00:13:48.441 "raid_level": "raid1", 00:13:48.441 "superblock": true, 00:13:48.441 "num_base_bdevs": 2, 00:13:48.441 "num_base_bdevs_discovered": 1, 00:13:48.441 "num_base_bdevs_operational": 1, 00:13:48.441 "base_bdevs_list": [ 00:13:48.441 { 00:13:48.441 "name": null, 00:13:48.441 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:48.441 "is_configured": false, 00:13:48.441 "data_offset": 0, 00:13:48.441 "data_size": 7936 00:13:48.441 }, 00:13:48.441 { 00:13:48.441 "name": "BaseBdev2", 00:13:48.441 "uuid": 
"be8392f4-aa81-50ab-8f64-33f72bc6df3b", 00:13:48.441 "is_configured": true, 00:13:48.441 "data_offset": 256, 00:13:48.441 "data_size": 7936 00:13:48.441 } 00:13:48.441 ] 00:13:48.441 }' 00:13:48.441 19:54:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:48.698 19:54:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:48.698 19:54:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:48.698 19:54:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:48.698 19:54:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:48.698 19:54:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@652 -- # local es=0 00:13:48.698 19:54:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:48.698 19:54:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:13:48.698 19:54:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:48.698 19:54:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:13:48.698 19:54:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:48.698 19:54:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:48.698 19:54:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.698 19:54:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:13:48.698 [2024-11-26 19:54:39.428447] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 
00:13:48.698 [2024-11-26 19:54:39.428612] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:13:48.698 [2024-11-26 19:54:39.428624] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:48.698 request: 00:13:48.698 { 00:13:48.698 "base_bdev": "BaseBdev1", 00:13:48.698 "raid_bdev": "raid_bdev1", 00:13:48.698 "method": "bdev_raid_add_base_bdev", 00:13:48.698 "req_id": 1 00:13:48.698 } 00:13:48.698 Got JSON-RPC error response 00:13:48.698 response: 00:13:48.698 { 00:13:48.698 "code": -22, 00:13:48.698 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:13:48.698 } 00:13:48.698 19:54:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:13:48.698 19:54:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@655 -- # es=1 00:13:48.698 19:54:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:48.698 19:54:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:48.698 19:54:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:48.698 19:54:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@779 -- # sleep 1 00:13:49.632 19:54:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:49.632 19:54:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:49.632 19:54:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:49.632 19:54:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:49.632 19:54:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:49.632 19:54:40 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:49.632 19:54:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:49.632 19:54:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:49.632 19:54:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:49.632 19:54:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:49.632 19:54:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:49.632 19:54:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:49.632 19:54:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.632 19:54:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:13:49.632 19:54:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.632 19:54:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:49.632 "name": "raid_bdev1", 00:13:49.632 "uuid": "41cf006e-ff27-49a6-b75c-66ca1279d846", 00:13:49.632 "strip_size_kb": 0, 00:13:49.632 "state": "online", 00:13:49.632 "raid_level": "raid1", 00:13:49.632 "superblock": true, 00:13:49.632 "num_base_bdevs": 2, 00:13:49.632 "num_base_bdevs_discovered": 1, 00:13:49.632 "num_base_bdevs_operational": 1, 00:13:49.632 "base_bdevs_list": [ 00:13:49.632 { 00:13:49.632 "name": null, 00:13:49.632 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:49.632 "is_configured": false, 00:13:49.632 "data_offset": 0, 00:13:49.632 "data_size": 7936 00:13:49.632 }, 00:13:49.632 { 00:13:49.632 "name": "BaseBdev2", 00:13:49.632 "uuid": "be8392f4-aa81-50ab-8f64-33f72bc6df3b", 00:13:49.632 "is_configured": true, 00:13:49.632 "data_offset": 256, 00:13:49.632 "data_size": 7936 00:13:49.632 } 
00:13:49.632 ] 00:13:49.632 }' 00:13:49.632 19:54:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:49.632 19:54:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:13:49.890 19:54:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:49.890 19:54:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:49.890 19:54:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:49.890 19:54:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:49.890 19:54:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:49.890 19:54:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:49.890 19:54:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:49.890 19:54:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.890 19:54:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:13:49.890 19:54:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.890 19:54:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:49.890 "name": "raid_bdev1", 00:13:49.890 "uuid": "41cf006e-ff27-49a6-b75c-66ca1279d846", 00:13:49.890 "strip_size_kb": 0, 00:13:49.890 "state": "online", 00:13:49.890 "raid_level": "raid1", 00:13:49.890 "superblock": true, 00:13:49.890 "num_base_bdevs": 2, 00:13:49.890 "num_base_bdevs_discovered": 1, 00:13:49.890 "num_base_bdevs_operational": 1, 00:13:49.890 "base_bdevs_list": [ 00:13:49.890 { 00:13:49.890 "name": null, 00:13:49.890 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:49.890 "is_configured": false, 
00:13:49.890 "data_offset": 0, 00:13:49.890 "data_size": 7936 00:13:49.890 }, 00:13:49.890 { 00:13:49.890 "name": "BaseBdev2", 00:13:49.890 "uuid": "be8392f4-aa81-50ab-8f64-33f72bc6df3b", 00:13:49.890 "is_configured": true, 00:13:49.890 "data_offset": 256, 00:13:49.890 "data_size": 7936 00:13:49.890 } 00:13:49.890 ] 00:13:49.890 }' 00:13:49.890 19:54:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:50.148 19:54:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:50.148 19:54:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:50.148 19:54:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:50.148 19:54:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@784 -- # killprocess 83975 00:13:50.148 19:54:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@954 -- # '[' -z 83975 ']' 00:13:50.148 19:54:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@958 -- # kill -0 83975 00:13:50.148 19:54:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@959 -- # uname 00:13:50.148 19:54:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:50.148 19:54:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83975 00:13:50.148 killing process with pid 83975 00:13:50.148 Received shutdown signal, test time was about 60.000000 seconds 00:13:50.148 00:13:50.148 Latency(us) 00:13:50.148 [2024-11-26T19:54:41.085Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:50.148 [2024-11-26T19:54:41.085Z] =================================================================================================================== 00:13:50.148 [2024-11-26T19:54:41.085Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:50.148 
19:54:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:50.148 19:54:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:50.148 19:54:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83975' 00:13:50.148 19:54:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@973 -- # kill 83975 00:13:50.148 [2024-11-26 19:54:40.882408] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:50.148 19:54:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@978 -- # wait 83975 00:13:50.148 [2024-11-26 19:54:40.882529] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:50.148 [2024-11-26 19:54:40.882576] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:50.148 [2024-11-26 19:54:40.882587] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:13:50.148 [2024-11-26 19:54:41.039309] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:50.714 19:54:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@786 -- # return 0 00:13:50.714 00:13:50.714 real 0m17.077s 00:13:50.714 user 0m21.742s 00:13:50.714 sys 0m1.906s 00:13:50.714 19:54:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:50.714 ************************************ 00:13:50.714 END TEST raid_rebuild_test_sb_4k 00:13:50.714 ************************************ 00:13:50.714 19:54:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:13:50.973 19:54:41 bdev_raid -- bdev/bdev_raid.sh@1003 -- # base_malloc_params='-m 32' 00:13:50.973 19:54:41 bdev_raid -- bdev/bdev_raid.sh@1004 -- # run_test raid_state_function_test_sb_md_separate raid_state_function_test raid1 2 true 00:13:50.973 
19:54:41 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:13:50.973 19:54:41 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:50.973 19:54:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:50.973 ************************************ 00:13:50.973 START TEST raid_state_function_test_sb_md_separate 00:13:50.973 ************************************ 00:13:50.973 19:54:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:13:50.973 19:54:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:13:50.973 19:54:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:13:50.973 19:54:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:13:50.973 19:54:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:13:50.973 19:54:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:13:50.973 19:54:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:50.973 19:54:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:13:50.973 19:54:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:50.973 19:54:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:50.973 19:54:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:13:50.973 19:54:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:50.973 19:54:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 
00:13:50.973 19:54:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:13:50.973 19:54:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:13:50.973 19:54:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:13:50.973 19:54:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # local strip_size 00:13:50.973 19:54:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:13:50.973 19:54:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:13:50.973 19:54:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:13:50.973 19:54:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:13:50.973 19:54:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:13:50.973 19:54:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:13:50.973 Process raid pid: 84638 00:13:50.973 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:13:50.973 19:54:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@229 -- # raid_pid=84638 00:13:50.973 19:54:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 84638' 00:13:50.973 19:54:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@231 -- # waitforlisten 84638 00:13:50.973 19:54:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@835 -- # '[' -z 84638 ']' 00:13:50.973 19:54:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:50.973 19:54:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:50.973 19:54:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:50.973 19:54:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:50.973 19:54:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:13:50.973 19:54:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:13:50.973 [2024-11-26 19:54:41.765262] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 
00:13:50.974 [2024-11-26 19:54:41.765606] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:51.232 [2024-11-26 19:54:41.924595] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:51.232 [2024-11-26 19:54:42.026280] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:51.232 [2024-11-26 19:54:42.148871] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:51.232 [2024-11-26 19:54:42.148899] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:51.798 19:54:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:51.798 19:54:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@868 -- # return 0 00:13:51.798 19:54:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:13:51.798 19:54:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.798 19:54:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:13:51.798 [2024-11-26 19:54:42.567384] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:51.798 [2024-11-26 19:54:42.567431] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:51.798 [2024-11-26 19:54:42.567440] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:51.798 [2024-11-26 19:54:42.567448] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:51.798 19:54:42 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.798 19:54:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:13:51.798 19:54:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:51.798 19:54:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:51.798 19:54:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:51.798 19:54:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:51.798 19:54:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:51.798 19:54:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:51.798 19:54:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:51.798 19:54:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:51.798 19:54:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:51.798 19:54:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:51.798 19:54:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:51.798 19:54:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.798 19:54:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:13:51.798 19:54:42 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.798 19:54:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:51.798 "name": "Existed_Raid", 00:13:51.798 "uuid": "23244f9e-c5e6-426b-a422-ff5565ee970b", 00:13:51.798 "strip_size_kb": 0, 00:13:51.798 "state": "configuring", 00:13:51.798 "raid_level": "raid1", 00:13:51.798 "superblock": true, 00:13:51.798 "num_base_bdevs": 2, 00:13:51.798 "num_base_bdevs_discovered": 0, 00:13:51.798 "num_base_bdevs_operational": 2, 00:13:51.798 "base_bdevs_list": [ 00:13:51.798 { 00:13:51.798 "name": "BaseBdev1", 00:13:51.798 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:51.798 "is_configured": false, 00:13:51.798 "data_offset": 0, 00:13:51.798 "data_size": 0 00:13:51.798 }, 00:13:51.798 { 00:13:51.798 "name": "BaseBdev2", 00:13:51.798 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:51.798 "is_configured": false, 00:13:51.798 "data_offset": 0, 00:13:51.798 "data_size": 0 00:13:51.798 } 00:13:51.798 ] 00:13:51.798 }' 00:13:51.798 19:54:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:51.798 19:54:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:13:52.057 19:54:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:52.057 19:54:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.057 19:54:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:13:52.057 [2024-11-26 19:54:42.879375] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:52.057 [2024-11-26 19:54:42.879406] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:13:52.057 
19:54:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.057 19:54:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:13:52.057 19:54:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.057 19:54:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:13:52.057 [2024-11-26 19:54:42.887386] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:52.057 [2024-11-26 19:54:42.887491] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:52.057 [2024-11-26 19:54:42.887542] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:52.057 [2024-11-26 19:54:42.887565] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:52.057 19:54:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.057 19:54:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1 00:13:52.057 19:54:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.057 19:54:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:13:52.057 [2024-11-26 19:54:42.917767] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:52.057 BaseBdev1 00:13:52.057 19:54:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.057 19:54:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:13:52.057 19:54:42 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:13:52.057 19:54:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:52.057 19:54:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # local i 00:13:52.057 19:54:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:52.057 19:54:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:52.057 19:54:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:52.057 19:54:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.057 19:54:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:13:52.057 19:54:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.057 19:54:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:52.057 19:54:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.057 19:54:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:13:52.057 [ 00:13:52.057 { 00:13:52.057 "name": "BaseBdev1", 00:13:52.057 "aliases": [ 00:13:52.057 "79b00072-506a-4a40-a888-de7960f9bcaa" 00:13:52.057 ], 00:13:52.057 "product_name": "Malloc disk", 00:13:52.057 "block_size": 4096, 00:13:52.057 "num_blocks": 8192, 00:13:52.057 "uuid": "79b00072-506a-4a40-a888-de7960f9bcaa", 00:13:52.057 "md_size": 32, 00:13:52.057 "md_interleave": false, 00:13:52.057 "dif_type": 0, 00:13:52.057 "assigned_rate_limits": { 00:13:52.057 
"rw_ios_per_sec": 0, 00:13:52.057 "rw_mbytes_per_sec": 0, 00:13:52.057 "r_mbytes_per_sec": 0, 00:13:52.057 "w_mbytes_per_sec": 0 00:13:52.057 }, 00:13:52.057 "claimed": true, 00:13:52.057 "claim_type": "exclusive_write", 00:13:52.057 "zoned": false, 00:13:52.057 "supported_io_types": { 00:13:52.057 "read": true, 00:13:52.057 "write": true, 00:13:52.057 "unmap": true, 00:13:52.057 "flush": true, 00:13:52.057 "reset": true, 00:13:52.057 "nvme_admin": false, 00:13:52.057 "nvme_io": false, 00:13:52.057 "nvme_io_md": false, 00:13:52.057 "write_zeroes": true, 00:13:52.057 "zcopy": true, 00:13:52.057 "get_zone_info": false, 00:13:52.057 "zone_management": false, 00:13:52.057 "zone_append": false, 00:13:52.057 "compare": false, 00:13:52.057 "compare_and_write": false, 00:13:52.057 "abort": true, 00:13:52.057 "seek_hole": false, 00:13:52.057 "seek_data": false, 00:13:52.057 "copy": true, 00:13:52.058 "nvme_iov_md": false 00:13:52.058 }, 00:13:52.058 "memory_domains": [ 00:13:52.058 { 00:13:52.058 "dma_device_id": "system", 00:13:52.058 "dma_device_type": 1 00:13:52.058 }, 00:13:52.058 { 00:13:52.058 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:52.058 "dma_device_type": 2 00:13:52.058 } 00:13:52.058 ], 00:13:52.058 "driver_specific": {} 00:13:52.058 } 00:13:52.058 ] 00:13:52.058 19:54:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.058 19:54:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@911 -- # return 0 00:13:52.058 19:54:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:13:52.058 19:54:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:52.058 19:54:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:52.058 19:54:42 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:52.058 19:54:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:52.058 19:54:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:52.058 19:54:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:52.058 19:54:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:52.058 19:54:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:52.058 19:54:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:52.058 19:54:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:52.058 19:54:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:52.058 19:54:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.058 19:54:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:13:52.058 19:54:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.058 19:54:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:52.058 "name": "Existed_Raid", 00:13:52.058 "uuid": "b06400a9-0d94-449f-ae97-e520ce08f827", 00:13:52.058 "strip_size_kb": 0, 00:13:52.058 "state": "configuring", 00:13:52.058 "raid_level": "raid1", 00:13:52.058 "superblock": true, 00:13:52.058 "num_base_bdevs": 2, 00:13:52.058 "num_base_bdevs_discovered": 1, 00:13:52.058 "num_base_bdevs_operational": 2, 00:13:52.058 
"base_bdevs_list": [ 00:13:52.058 { 00:13:52.058 "name": "BaseBdev1", 00:13:52.058 "uuid": "79b00072-506a-4a40-a888-de7960f9bcaa", 00:13:52.058 "is_configured": true, 00:13:52.058 "data_offset": 256, 00:13:52.058 "data_size": 7936 00:13:52.058 }, 00:13:52.058 { 00:13:52.058 "name": "BaseBdev2", 00:13:52.058 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:52.058 "is_configured": false, 00:13:52.058 "data_offset": 0, 00:13:52.058 "data_size": 0 00:13:52.058 } 00:13:52.058 ] 00:13:52.058 }' 00:13:52.058 19:54:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:52.058 19:54:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:13:52.623 19:54:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:52.623 19:54:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.623 19:54:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:13:52.623 [2024-11-26 19:54:43.277907] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:52.623 [2024-11-26 19:54:43.277953] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:13:52.623 19:54:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.624 19:54:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:13:52.624 19:54:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.624 19:54:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:13:52.624 [2024-11-26 19:54:43.285947] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:52.624 [2024-11-26 19:54:43.287606] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:52.624 [2024-11-26 19:54:43.287644] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:52.624 19:54:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.624 19:54:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:13:52.624 19:54:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:52.624 19:54:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:13:52.624 19:54:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:52.624 19:54:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:52.624 19:54:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:52.624 19:54:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:52.624 19:54:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:52.624 19:54:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:52.624 19:54:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:52.624 19:54:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:52.624 19:54:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 
-- # local tmp 00:13:52.624 19:54:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:52.624 19:54:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:52.624 19:54:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.624 19:54:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:13:52.624 19:54:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.624 19:54:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:52.624 "name": "Existed_Raid", 00:13:52.624 "uuid": "9066659c-19d0-4ff2-b7af-0728d6f31e11", 00:13:52.624 "strip_size_kb": 0, 00:13:52.624 "state": "configuring", 00:13:52.624 "raid_level": "raid1", 00:13:52.624 "superblock": true, 00:13:52.624 "num_base_bdevs": 2, 00:13:52.624 "num_base_bdevs_discovered": 1, 00:13:52.624 "num_base_bdevs_operational": 2, 00:13:52.624 "base_bdevs_list": [ 00:13:52.624 { 00:13:52.624 "name": "BaseBdev1", 00:13:52.624 "uuid": "79b00072-506a-4a40-a888-de7960f9bcaa", 00:13:52.624 "is_configured": true, 00:13:52.624 "data_offset": 256, 00:13:52.624 "data_size": 7936 00:13:52.624 }, 00:13:52.624 { 00:13:52.624 "name": "BaseBdev2", 00:13:52.624 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:52.624 "is_configured": false, 00:13:52.624 "data_offset": 0, 00:13:52.624 "data_size": 0 00:13:52.624 } 00:13:52.624 ] 00:13:52.624 }' 00:13:52.624 19:54:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:52.624 19:54:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:13:52.882 19:54:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@252 -- # rpc_cmd 
bdev_malloc_create 32 4096 -m 32 -b BaseBdev2 00:13:52.882 19:54:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.882 19:54:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:13:52.882 [2024-11-26 19:54:43.642721] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:52.882 [2024-11-26 19:54:43.642926] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:13:52.882 [2024-11-26 19:54:43.642939] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:13:52.882 [2024-11-26 19:54:43.643017] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:13:52.882 [2024-11-26 19:54:43.643119] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:13:52.882 [2024-11-26 19:54:43.643128] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:13:52.882 [2024-11-26 19:54:43.643197] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:52.882 BaseBdev2 00:13:52.882 19:54:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.882 19:54:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:13:52.882 19:54:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:13:52.882 19:54:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:52.882 19:54:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # local i 00:13:52.883 19:54:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:52.883 19:54:43 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:52.883 19:54:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:52.883 19:54:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.883 19:54:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:13:52.883 19:54:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.883 19:54:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:52.883 19:54:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.883 19:54:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:13:52.883 [ 00:13:52.883 { 00:13:52.883 "name": "BaseBdev2", 00:13:52.883 "aliases": [ 00:13:52.883 "b46cab6a-e06a-49ed-9c29-13195fa829da" 00:13:52.883 ], 00:13:52.883 "product_name": "Malloc disk", 00:13:52.883 "block_size": 4096, 00:13:52.883 "num_blocks": 8192, 00:13:52.883 "uuid": "b46cab6a-e06a-49ed-9c29-13195fa829da", 00:13:52.883 "md_size": 32, 00:13:52.883 "md_interleave": false, 00:13:52.883 "dif_type": 0, 00:13:52.883 "assigned_rate_limits": { 00:13:52.883 "rw_ios_per_sec": 0, 00:13:52.883 "rw_mbytes_per_sec": 0, 00:13:52.883 "r_mbytes_per_sec": 0, 00:13:52.883 "w_mbytes_per_sec": 0 00:13:52.883 }, 00:13:52.883 "claimed": true, 00:13:52.883 "claim_type": "exclusive_write", 00:13:52.883 "zoned": false, 00:13:52.883 "supported_io_types": { 00:13:52.883 "read": true, 00:13:52.883 "write": true, 00:13:52.883 "unmap": true, 00:13:52.883 "flush": true, 00:13:52.883 "reset": true, 00:13:52.883 "nvme_admin": false, 00:13:52.883 "nvme_io": false, 00:13:52.883 "nvme_io_md": 
false, 00:13:52.883 "write_zeroes": true, 00:13:52.883 "zcopy": true, 00:13:52.883 "get_zone_info": false, 00:13:52.883 "zone_management": false, 00:13:52.883 "zone_append": false, 00:13:52.883 "compare": false, 00:13:52.883 "compare_and_write": false, 00:13:52.883 "abort": true, 00:13:52.883 "seek_hole": false, 00:13:52.883 "seek_data": false, 00:13:52.883 "copy": true, 00:13:52.883 "nvme_iov_md": false 00:13:52.883 }, 00:13:52.883 "memory_domains": [ 00:13:52.883 { 00:13:52.883 "dma_device_id": "system", 00:13:52.883 "dma_device_type": 1 00:13:52.883 }, 00:13:52.883 { 00:13:52.883 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:52.883 "dma_device_type": 2 00:13:52.883 } 00:13:52.883 ], 00:13:52.883 "driver_specific": {} 00:13:52.883 } 00:13:52.883 ] 00:13:52.883 19:54:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.883 19:54:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@911 -- # return 0 00:13:52.883 19:54:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:52.883 19:54:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:52.883 19:54:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:13:52.883 19:54:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:52.883 19:54:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:52.883 19:54:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:52.883 19:54:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:52.883 19:54:43 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:52.883 19:54:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:52.883 19:54:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:52.883 19:54:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:52.883 19:54:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:52.883 19:54:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:52.883 19:54:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:52.883 19:54:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.883 19:54:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:13:52.883 19:54:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.883 19:54:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:52.883 "name": "Existed_Raid", 00:13:52.883 "uuid": "9066659c-19d0-4ff2-b7af-0728d6f31e11", 00:13:52.883 "strip_size_kb": 0, 00:13:52.883 "state": "online", 00:13:52.883 "raid_level": "raid1", 00:13:52.883 "superblock": true, 00:13:52.883 "num_base_bdevs": 2, 00:13:52.883 "num_base_bdevs_discovered": 2, 00:13:52.883 "num_base_bdevs_operational": 2, 00:13:52.883 "base_bdevs_list": [ 00:13:52.883 { 00:13:52.883 "name": "BaseBdev1", 00:13:52.883 "uuid": "79b00072-506a-4a40-a888-de7960f9bcaa", 00:13:52.883 "is_configured": true, 00:13:52.883 "data_offset": 256, 00:13:52.883 "data_size": 7936 00:13:52.883 }, 00:13:52.883 { 00:13:52.883 "name": "BaseBdev2", 00:13:52.883 
"uuid": "b46cab6a-e06a-49ed-9c29-13195fa829da", 00:13:52.883 "is_configured": true, 00:13:52.883 "data_offset": 256, 00:13:52.883 "data_size": 7936 00:13:52.883 } 00:13:52.883 ] 00:13:52.883 }' 00:13:52.883 19:54:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:52.883 19:54:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:13:53.141 19:54:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:13:53.141 19:54:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:53.141 19:54:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:53.141 19:54:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:53.141 19:54:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:13:53.141 19:54:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:53.141 19:54:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:53.141 19:54:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:53.142 19:54:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.142 19:54:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:13:53.142 [2024-11-26 19:54:43.999131] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:53.142 19:54:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.142 19:54:44 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:53.142 "name": "Existed_Raid", 00:13:53.142 "aliases": [ 00:13:53.142 "9066659c-19d0-4ff2-b7af-0728d6f31e11" 00:13:53.142 ], 00:13:53.142 "product_name": "Raid Volume", 00:13:53.142 "block_size": 4096, 00:13:53.142 "num_blocks": 7936, 00:13:53.142 "uuid": "9066659c-19d0-4ff2-b7af-0728d6f31e11", 00:13:53.142 "md_size": 32, 00:13:53.142 "md_interleave": false, 00:13:53.142 "dif_type": 0, 00:13:53.142 "assigned_rate_limits": { 00:13:53.142 "rw_ios_per_sec": 0, 00:13:53.142 "rw_mbytes_per_sec": 0, 00:13:53.142 "r_mbytes_per_sec": 0, 00:13:53.142 "w_mbytes_per_sec": 0 00:13:53.142 }, 00:13:53.142 "claimed": false, 00:13:53.142 "zoned": false, 00:13:53.142 "supported_io_types": { 00:13:53.142 "read": true, 00:13:53.142 "write": true, 00:13:53.142 "unmap": false, 00:13:53.142 "flush": false, 00:13:53.142 "reset": true, 00:13:53.142 "nvme_admin": false, 00:13:53.142 "nvme_io": false, 00:13:53.142 "nvme_io_md": false, 00:13:53.142 "write_zeroes": true, 00:13:53.142 "zcopy": false, 00:13:53.142 "get_zone_info": false, 00:13:53.142 "zone_management": false, 00:13:53.142 "zone_append": false, 00:13:53.142 "compare": false, 00:13:53.142 "compare_and_write": false, 00:13:53.142 "abort": false, 00:13:53.142 "seek_hole": false, 00:13:53.142 "seek_data": false, 00:13:53.142 "copy": false, 00:13:53.142 "nvme_iov_md": false 00:13:53.142 }, 00:13:53.142 "memory_domains": [ 00:13:53.142 { 00:13:53.142 "dma_device_id": "system", 00:13:53.142 "dma_device_type": 1 00:13:53.142 }, 00:13:53.142 { 00:13:53.142 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:53.142 "dma_device_type": 2 00:13:53.142 }, 00:13:53.142 { 00:13:53.142 "dma_device_id": "system", 00:13:53.142 "dma_device_type": 1 00:13:53.142 }, 00:13:53.142 { 00:13:53.142 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:53.142 "dma_device_type": 2 00:13:53.142 } 00:13:53.142 ], 00:13:53.142 "driver_specific": { 00:13:53.142 "raid": 
{ 00:13:53.142 "uuid": "9066659c-19d0-4ff2-b7af-0728d6f31e11", 00:13:53.142 "strip_size_kb": 0, 00:13:53.142 "state": "online", 00:13:53.142 "raid_level": "raid1", 00:13:53.142 "superblock": true, 00:13:53.142 "num_base_bdevs": 2, 00:13:53.142 "num_base_bdevs_discovered": 2, 00:13:53.142 "num_base_bdevs_operational": 2, 00:13:53.142 "base_bdevs_list": [ 00:13:53.142 { 00:13:53.142 "name": "BaseBdev1", 00:13:53.142 "uuid": "79b00072-506a-4a40-a888-de7960f9bcaa", 00:13:53.142 "is_configured": true, 00:13:53.142 "data_offset": 256, 00:13:53.142 "data_size": 7936 00:13:53.142 }, 00:13:53.142 { 00:13:53.142 "name": "BaseBdev2", 00:13:53.142 "uuid": "b46cab6a-e06a-49ed-9c29-13195fa829da", 00:13:53.142 "is_configured": true, 00:13:53.142 "data_offset": 256, 00:13:53.142 "data_size": 7936 00:13:53.142 } 00:13:53.142 ] 00:13:53.142 } 00:13:53.142 } 00:13:53.142 }' 00:13:53.142 19:54:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:53.142 19:54:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:13:53.142 BaseBdev2' 00:13:53.142 19:54:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:53.400 19:54:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:13:53.401 19:54:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:53.401 19:54:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:53.401 19:54:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:13:53.401 19:54:44 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.401 19:54:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:13:53.401 19:54:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.401 19:54:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:13:53.401 19:54:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:13:53.401 19:54:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:53.401 19:54:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:53.401 19:54:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:53.401 19:54:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.401 19:54:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:13:53.401 19:54:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.401 19:54:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:13:53.401 19:54:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:13:53.401 19:54:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:53.401 19:54:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:13:53.401 19:54:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:13:53.401 [2024-11-26 19:54:44.166885] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:53.401 19:54:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.401 19:54:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@260 -- # local expected_state 00:13:53.401 19:54:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:13:53.401 19:54:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:53.401 19:54:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:13:53.401 19:54:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:13:53.401 19:54:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:13:53.401 19:54:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:53.401 19:54:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:53.401 19:54:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:53.401 19:54:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:53.401 19:54:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:53.401 19:54:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:53.401 19:54:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:13:53.401 19:54:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:53.401 19:54:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:53.401 19:54:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:53.401 19:54:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:53.401 19:54:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.401 19:54:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:13:53.401 19:54:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.401 19:54:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:53.401 "name": "Existed_Raid", 00:13:53.401 "uuid": "9066659c-19d0-4ff2-b7af-0728d6f31e11", 00:13:53.401 "strip_size_kb": 0, 00:13:53.401 "state": "online", 00:13:53.401 "raid_level": "raid1", 00:13:53.401 "superblock": true, 00:13:53.401 "num_base_bdevs": 2, 00:13:53.401 "num_base_bdevs_discovered": 1, 00:13:53.401 "num_base_bdevs_operational": 1, 00:13:53.401 "base_bdevs_list": [ 00:13:53.401 { 00:13:53.401 "name": null, 00:13:53.401 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:53.401 "is_configured": false, 00:13:53.401 "data_offset": 0, 00:13:53.401 "data_size": 7936 00:13:53.401 }, 00:13:53.401 { 00:13:53.401 "name": "BaseBdev2", 00:13:53.401 "uuid": "b46cab6a-e06a-49ed-9c29-13195fa829da", 00:13:53.401 "is_configured": true, 00:13:53.401 "data_offset": 256, 00:13:53.401 "data_size": 7936 00:13:53.401 } 00:13:53.401 ] 00:13:53.401 }' 00:13:53.401 19:54:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:13:53.401 19:54:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:13:53.659 19:54:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:13:53.659 19:54:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:53.659 19:54:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:53.659 19:54:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.659 19:54:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:13:53.659 19:54:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:53.659 19:54:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.659 19:54:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:53.659 19:54:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:53.659 19:54:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:13:53.659 19:54:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.659 19:54:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:13:53.659 [2024-11-26 19:54:44.582840] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:53.659 [2024-11-26 19:54:44.582935] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:53.917 [2024-11-26 19:54:44.635542] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 
00:13:53.917 [2024-11-26 19:54:44.635586] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:53.917 [2024-11-26 19:54:44.635596] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:13:53.917 19:54:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.917 19:54:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:53.917 19:54:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:53.917 19:54:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:53.917 19:54:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.917 19:54:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:13:53.918 19:54:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:13:53.918 19:54:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.918 19:54:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:13:53.918 19:54:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:13:53.918 19:54:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:13:53.918 19:54:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@326 -- # killprocess 84638 00:13:53.918 19:54:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@954 -- # '[' -z 84638 ']' 00:13:53.918 19:54:44 bdev_raid.raid_state_function_test_sb_md_separate -- 
common/autotest_common.sh@958 -- # kill -0 84638 00:13:53.918 19:54:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@959 -- # uname 00:13:53.918 19:54:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:53.918 19:54:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84638 00:13:53.918 killing process with pid 84638 00:13:53.918 19:54:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:53.918 19:54:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:53.918 19:54:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84638' 00:13:53.918 19:54:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@973 -- # kill 84638 00:13:53.918 [2024-11-26 19:54:44.695547] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:53.918 19:54:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@978 -- # wait 84638 00:13:53.918 [2024-11-26 19:54:44.704180] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:54.484 19:54:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@328 -- # return 0 00:13:54.484 00:13:54.484 real 0m3.604s 00:13:54.484 user 0m5.221s 00:13:54.484 sys 0m0.595s 00:13:54.484 19:54:45 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:54.484 ************************************ 00:13:54.484 END TEST raid_state_function_test_sb_md_separate 00:13:54.484 ************************************ 00:13:54.484 19:54:45 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:13:54.484 19:54:45 bdev_raid -- 
bdev/bdev_raid.sh@1005 -- # run_test raid_superblock_test_md_separate raid_superblock_test raid1 2 00:13:54.484 19:54:45 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:13:54.484 19:54:45 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:54.484 19:54:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:54.484 ************************************ 00:13:54.484 START TEST raid_superblock_test_md_separate 00:13:54.484 ************************************ 00:13:54.484 19:54:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:13:54.484 19:54:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:13:54.484 19:54:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:13:54.484 19:54:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:13:54.484 19:54:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:13:54.484 19:54:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:13:54.484 19:54:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:13:54.484 19:54:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:13:54.484 19:54:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:13:54.484 19:54:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:13:54.484 19:54:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@399 -- # local strip_size 00:13:54.484 19:54:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:13:54.484 19:54:45 bdev_raid.raid_superblock_test_md_separate 
-- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:13:54.484 19:54:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:13:54.484 19:54:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:13:54.484 19:54:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:13:54.484 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:54.484 19:54:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 -- # raid_pid=84874 00:13:54.484 19:54:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@413 -- # waitforlisten 84874 00:13:54.484 19:54:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@835 -- # '[' -z 84874 ']' 00:13:54.484 19:54:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:54.484 19:54:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:54.484 19:54:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:54.484 19:54:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:54.484 19:54:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:13:54.484 19:54:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:13:54.484 [2024-11-26 19:54:45.407278] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 
00:13:54.484 [2024-11-26 19:54:45.407588] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84874 ] 00:13:54.742 [2024-11-26 19:54:45.561397] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:54.742 [2024-11-26 19:54:45.652871] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:55.000 [2024-11-26 19:54:45.770923] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:55.000 [2024-11-26 19:54:45.771134] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:55.566 19:54:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:55.566 19:54:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@868 -- # return 0 00:13:55.566 19:54:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:13:55.566 19:54:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:55.566 19:54:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:13:55.566 19:54:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:13:55.566 19:54:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:13:55.566 19:54:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:55.566 19:54:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:55.566 19:54:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:55.566 19:54:46 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc1 00:13:55.566 19:54:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.566 19:54:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:13:55.566 malloc1 00:13:55.566 19:54:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.566 19:54:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:55.566 19:54:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.566 19:54:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:13:55.566 [2024-11-26 19:54:46.278914] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:55.566 [2024-11-26 19:54:46.278978] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:55.566 [2024-11-26 19:54:46.278998] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:55.566 [2024-11-26 19:54:46.279007] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:55.566 [2024-11-26 19:54:46.280693] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:55.566 [2024-11-26 19:54:46.280723] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:55.566 pt1 00:13:55.566 19:54:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.566 19:54:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:55.566 19:54:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:55.566 
19:54:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:13:55.566 19:54:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:13:55.566 19:54:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:13:55.566 19:54:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:55.566 19:54:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:55.566 19:54:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:55.566 19:54:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc2 00:13:55.566 19:54:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.566 19:54:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:13:55.566 malloc2 00:13:55.566 19:54:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.566 19:54:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:55.566 19:54:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.566 19:54:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:13:55.566 [2024-11-26 19:54:46.316819] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:55.566 [2024-11-26 19:54:46.316861] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:55.566 [2024-11-26 19:54:46.316877] vbdev_passthru.c: 681:vbdev_passthru_register: 
*NOTICE*: io_device created at: 0x0x616000007e80 00:13:55.566 [2024-11-26 19:54:46.316885] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:55.566 [2024-11-26 19:54:46.318505] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:55.566 [2024-11-26 19:54:46.318637] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:55.566 pt2 00:13:55.566 19:54:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.566 19:54:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:55.566 19:54:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:55.566 19:54:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:13:55.566 19:54:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.566 19:54:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:13:55.566 [2024-11-26 19:54:46.324851] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:55.566 [2024-11-26 19:54:46.326455] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:55.566 [2024-11-26 19:54:46.326598] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:55.566 [2024-11-26 19:54:46.326610] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:13:55.566 [2024-11-26 19:54:46.326666] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:13:55.566 [2024-11-26 19:54:46.326759] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:55.566 [2024-11-26 19:54:46.326769] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid 
bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:55.566 [2024-11-26 19:54:46.326842] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:55.566 19:54:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.566 19:54:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:55.566 19:54:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:55.566 19:54:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:55.566 19:54:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:55.566 19:54:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:55.566 19:54:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:55.566 19:54:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:55.566 19:54:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:55.566 19:54:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:55.566 19:54:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:55.566 19:54:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:55.566 19:54:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.566 19:54:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:13:55.566 19:54:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:13:55.566 19:54:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.566 19:54:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:55.566 "name": "raid_bdev1", 00:13:55.566 "uuid": "765c9d79-470d-49dd-b61b-094956b8b039", 00:13:55.566 "strip_size_kb": 0, 00:13:55.566 "state": "online", 00:13:55.566 "raid_level": "raid1", 00:13:55.566 "superblock": true, 00:13:55.566 "num_base_bdevs": 2, 00:13:55.566 "num_base_bdevs_discovered": 2, 00:13:55.566 "num_base_bdevs_operational": 2, 00:13:55.566 "base_bdevs_list": [ 00:13:55.566 { 00:13:55.566 "name": "pt1", 00:13:55.566 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:55.566 "is_configured": true, 00:13:55.566 "data_offset": 256, 00:13:55.566 "data_size": 7936 00:13:55.566 }, 00:13:55.566 { 00:13:55.567 "name": "pt2", 00:13:55.567 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:55.567 "is_configured": true, 00:13:55.567 "data_offset": 256, 00:13:55.567 "data_size": 7936 00:13:55.567 } 00:13:55.567 ] 00:13:55.567 }' 00:13:55.567 19:54:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:55.567 19:54:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:13:55.825 19:54:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:13:55.825 19:54:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:13:55.825 19:54:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:55.825 19:54:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:55.825 19:54:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:13:55.825 19:54:46 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:55.825 19:54:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:55.825 19:54:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:55.825 19:54:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.825 19:54:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:13:55.825 [2024-11-26 19:54:46.661192] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:55.825 19:54:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.825 19:54:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:55.825 "name": "raid_bdev1", 00:13:55.825 "aliases": [ 00:13:55.825 "765c9d79-470d-49dd-b61b-094956b8b039" 00:13:55.825 ], 00:13:55.825 "product_name": "Raid Volume", 00:13:55.825 "block_size": 4096, 00:13:55.825 "num_blocks": 7936, 00:13:55.825 "uuid": "765c9d79-470d-49dd-b61b-094956b8b039", 00:13:55.825 "md_size": 32, 00:13:55.825 "md_interleave": false, 00:13:55.825 "dif_type": 0, 00:13:55.825 "assigned_rate_limits": { 00:13:55.825 "rw_ios_per_sec": 0, 00:13:55.825 "rw_mbytes_per_sec": 0, 00:13:55.825 "r_mbytes_per_sec": 0, 00:13:55.825 "w_mbytes_per_sec": 0 00:13:55.825 }, 00:13:55.825 "claimed": false, 00:13:55.825 "zoned": false, 00:13:55.825 "supported_io_types": { 00:13:55.825 "read": true, 00:13:55.825 "write": true, 00:13:55.825 "unmap": false, 00:13:55.825 "flush": false, 00:13:55.825 "reset": true, 00:13:55.825 "nvme_admin": false, 00:13:55.825 "nvme_io": false, 00:13:55.825 "nvme_io_md": false, 00:13:55.825 "write_zeroes": true, 00:13:55.825 "zcopy": false, 00:13:55.825 "get_zone_info": false, 00:13:55.825 "zone_management": false, 00:13:55.825 "zone_append": false, 00:13:55.825 "compare": 
false, 00:13:55.825 "compare_and_write": false, 00:13:55.825 "abort": false, 00:13:55.825 "seek_hole": false, 00:13:55.825 "seek_data": false, 00:13:55.825 "copy": false, 00:13:55.825 "nvme_iov_md": false 00:13:55.825 }, 00:13:55.825 "memory_domains": [ 00:13:55.825 { 00:13:55.825 "dma_device_id": "system", 00:13:55.825 "dma_device_type": 1 00:13:55.825 }, 00:13:55.825 { 00:13:55.825 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:55.825 "dma_device_type": 2 00:13:55.825 }, 00:13:55.825 { 00:13:55.825 "dma_device_id": "system", 00:13:55.825 "dma_device_type": 1 00:13:55.825 }, 00:13:55.825 { 00:13:55.825 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:55.825 "dma_device_type": 2 00:13:55.825 } 00:13:55.825 ], 00:13:55.825 "driver_specific": { 00:13:55.825 "raid": { 00:13:55.825 "uuid": "765c9d79-470d-49dd-b61b-094956b8b039", 00:13:55.825 "strip_size_kb": 0, 00:13:55.825 "state": "online", 00:13:55.825 "raid_level": "raid1", 00:13:55.825 "superblock": true, 00:13:55.825 "num_base_bdevs": 2, 00:13:55.825 "num_base_bdevs_discovered": 2, 00:13:55.825 "num_base_bdevs_operational": 2, 00:13:55.825 "base_bdevs_list": [ 00:13:55.825 { 00:13:55.825 "name": "pt1", 00:13:55.825 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:55.825 "is_configured": true, 00:13:55.825 "data_offset": 256, 00:13:55.825 "data_size": 7936 00:13:55.825 }, 00:13:55.825 { 00:13:55.825 "name": "pt2", 00:13:55.825 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:55.825 "is_configured": true, 00:13:55.825 "data_offset": 256, 00:13:55.825 "data_size": 7936 00:13:55.825 } 00:13:55.825 ] 00:13:55.825 } 00:13:55.825 } 00:13:55.825 }' 00:13:55.825 19:54:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:55.825 19:54:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:13:55.825 pt2' 00:13:55.825 19:54:46 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:55.825 19:54:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:13:55.825 19:54:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:55.826 19:54:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:13:55.826 19:54:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.826 19:54:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:13:55.826 19:54:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:56.084 19:54:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.084 19:54:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:13:56.084 19:54:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:13:56.084 19:54:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:56.084 19:54:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:56.084 19:54:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:13:56.084 19:54:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.084 19:54:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:13:56.084 19:54:46 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.084 19:54:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:13:56.084 19:54:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:13:56.084 19:54:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:56.084 19:54:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.084 19:54:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:13:56.084 19:54:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:13:56.084 [2024-11-26 19:54:46.821132] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:56.084 19:54:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.084 19:54:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=765c9d79-470d-49dd-b61b-094956b8b039 00:13:56.084 19:54:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@436 -- # '[' -z 765c9d79-470d-49dd-b61b-094956b8b039 ']' 00:13:56.084 19:54:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:56.084 19:54:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.084 19:54:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:13:56.084 [2024-11-26 19:54:46.852904] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:56.085 [2024-11-26 19:54:46.852922] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:56.085 
[2024-11-26 19:54:46.852987] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:56.085 [2024-11-26 19:54:46.853040] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:56.085 [2024-11-26 19:54:46.853051] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:56.085 19:54:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.085 19:54:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:56.085 19:54:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.085 19:54:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:13:56.085 19:54:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:13:56.085 19:54:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.085 19:54:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:13:56.085 19:54:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:13:56.085 19:54:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:56.085 19:54:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:13:56.085 19:54:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.085 19:54:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:13:56.085 19:54:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.085 19:54:46 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:56.085 19:54:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:13:56.085 19:54:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.085 19:54:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:13:56.085 19:54:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.085 19:54:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:13:56.085 19:54:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.085 19:54:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:13:56.085 19:54:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:13:56.085 19:54:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.085 19:54:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:13:56.085 19:54:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:13:56.085 19:54:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@652 -- # local es=0 00:13:56.085 19:54:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:13:56.085 19:54:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:13:56.085 19:54:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 
00:13:56.085 19:54:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:13:56.085 19:54:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:56.085 19:54:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:13:56.085 19:54:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.085 19:54:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:13:56.085 [2024-11-26 19:54:46.956940] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:13:56.085 [2024-11-26 19:54:46.958614] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:13:56.085 [2024-11-26 19:54:46.958675] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:13:56.085 [2024-11-26 19:54:46.958717] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:13:56.085 [2024-11-26 19:54:46.958729] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:56.085 [2024-11-26 19:54:46.958737] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:13:56.085 request: 00:13:56.085 { 00:13:56.085 "name": "raid_bdev1", 00:13:56.085 "raid_level": "raid1", 00:13:56.085 "base_bdevs": [ 00:13:56.085 "malloc1", 00:13:56.085 "malloc2" 00:13:56.085 ], 00:13:56.085 "superblock": false, 00:13:56.085 "method": "bdev_raid_create", 00:13:56.085 "req_id": 1 00:13:56.085 } 00:13:56.085 Got JSON-RPC error response 00:13:56.085 response: 00:13:56.085 { 00:13:56.085 "code": -17, 00:13:56.085 "message": "Failed to create RAID bdev raid_bdev1: 
File exists" 00:13:56.085 } 00:13:56.085 19:54:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:13:56.085 19:54:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@655 -- # es=1 00:13:56.085 19:54:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:56.085 19:54:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:56.085 19:54:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:56.085 19:54:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:56.085 19:54:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.085 19:54:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:13:56.085 19:54:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:13:56.085 19:54:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.085 19:54:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:13:56.085 19:54:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:13:56.085 19:54:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:56.085 19:54:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.085 19:54:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:13:56.085 [2024-11-26 19:54:47.004933] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:56.085 [2024-11-26 19:54:47.004972] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:56.085 [2024-11-26 19:54:47.004984] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:56.085 [2024-11-26 19:54:47.004993] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:56.085 [2024-11-26 19:54:47.006688] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:56.085 [2024-11-26 19:54:47.006717] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:56.085 [2024-11-26 19:54:47.006752] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:13:56.085 [2024-11-26 19:54:47.006791] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:56.085 pt1 00:13:56.085 19:54:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.085 19:54:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:13:56.085 19:54:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:56.085 19:54:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:56.085 19:54:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:56.085 19:54:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:56.085 19:54:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:56.085 19:54:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:56.085 19:54:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:56.085 19:54:47 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:56.085 19:54:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:56.085 19:54:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:56.085 19:54:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:56.085 19:54:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.085 19:54:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:13:56.343 19:54:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.343 19:54:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:56.343 "name": "raid_bdev1", 00:13:56.343 "uuid": "765c9d79-470d-49dd-b61b-094956b8b039", 00:13:56.343 "strip_size_kb": 0, 00:13:56.343 "state": "configuring", 00:13:56.343 "raid_level": "raid1", 00:13:56.343 "superblock": true, 00:13:56.343 "num_base_bdevs": 2, 00:13:56.343 "num_base_bdevs_discovered": 1, 00:13:56.343 "num_base_bdevs_operational": 2, 00:13:56.343 "base_bdevs_list": [ 00:13:56.343 { 00:13:56.343 "name": "pt1", 00:13:56.343 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:56.343 "is_configured": true, 00:13:56.343 "data_offset": 256, 00:13:56.343 "data_size": 7936 00:13:56.343 }, 00:13:56.343 { 00:13:56.343 "name": null, 00:13:56.343 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:56.343 "is_configured": false, 00:13:56.343 "data_offset": 256, 00:13:56.343 "data_size": 7936 00:13:56.343 } 00:13:56.343 ] 00:13:56.343 }' 00:13:56.343 19:54:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:56.343 19:54:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:13:56.602 19:54:47 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:13:56.602 19:54:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:13:56.602 19:54:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:56.602 19:54:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:56.602 19:54:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.602 19:54:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:13:56.602 [2024-11-26 19:54:47.325021] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:56.602 [2024-11-26 19:54:47.325085] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:56.602 [2024-11-26 19:54:47.325102] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:13:56.602 [2024-11-26 19:54:47.325111] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:56.602 [2024-11-26 19:54:47.325303] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:56.602 [2024-11-26 19:54:47.325317] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:56.602 [2024-11-26 19:54:47.325369] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:56.602 [2024-11-26 19:54:47.325387] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:56.602 [2024-11-26 19:54:47.325475] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:13:56.602 [2024-11-26 19:54:47.325484] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:13:56.602 [2024-11-26 19:54:47.325541] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:13:56.602 [2024-11-26 19:54:47.325622] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:13:56.602 [2024-11-26 19:54:47.325633] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:13:56.602 [2024-11-26 19:54:47.325705] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:56.602 pt2 00:13:56.602 19:54:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.602 19:54:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:13:56.602 19:54:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:56.602 19:54:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:56.602 19:54:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:56.602 19:54:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:56.602 19:54:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:56.602 19:54:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:56.602 19:54:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:56.602 19:54:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:56.602 19:54:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:56.602 19:54:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:56.602 19:54:47 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:56.602 19:54:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:56.602 19:54:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.602 19:54:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:13:56.602 19:54:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:56.602 19:54:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.602 19:54:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:56.602 "name": "raid_bdev1", 00:13:56.602 "uuid": "765c9d79-470d-49dd-b61b-094956b8b039", 00:13:56.602 "strip_size_kb": 0, 00:13:56.602 "state": "online", 00:13:56.602 "raid_level": "raid1", 00:13:56.602 "superblock": true, 00:13:56.602 "num_base_bdevs": 2, 00:13:56.602 "num_base_bdevs_discovered": 2, 00:13:56.602 "num_base_bdevs_operational": 2, 00:13:56.602 "base_bdevs_list": [ 00:13:56.602 { 00:13:56.602 "name": "pt1", 00:13:56.602 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:56.602 "is_configured": true, 00:13:56.602 "data_offset": 256, 00:13:56.602 "data_size": 7936 00:13:56.602 }, 00:13:56.602 { 00:13:56.602 "name": "pt2", 00:13:56.602 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:56.602 "is_configured": true, 00:13:56.602 "data_offset": 256, 00:13:56.602 "data_size": 7936 00:13:56.602 } 00:13:56.602 ] 00:13:56.602 }' 00:13:56.602 19:54:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:56.602 19:54:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:13:56.860 19:54:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@484 -- # 
verify_raid_bdev_properties raid_bdev1 00:13:56.860 19:54:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:13:56.860 19:54:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:56.860 19:54:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:56.860 19:54:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:13:56.860 19:54:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:56.860 19:54:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:56.860 19:54:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.860 19:54:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:13:56.860 19:54:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:56.860 [2024-11-26 19:54:47.637311] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:56.860 19:54:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.860 19:54:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:56.860 "name": "raid_bdev1", 00:13:56.860 "aliases": [ 00:13:56.860 "765c9d79-470d-49dd-b61b-094956b8b039" 00:13:56.860 ], 00:13:56.860 "product_name": "Raid Volume", 00:13:56.860 "block_size": 4096, 00:13:56.860 "num_blocks": 7936, 00:13:56.860 "uuid": "765c9d79-470d-49dd-b61b-094956b8b039", 00:13:56.860 "md_size": 32, 00:13:56.860 "md_interleave": false, 00:13:56.860 "dif_type": 0, 00:13:56.860 "assigned_rate_limits": { 00:13:56.860 "rw_ios_per_sec": 0, 00:13:56.860 "rw_mbytes_per_sec": 0, 00:13:56.860 "r_mbytes_per_sec": 0, 00:13:56.860 
"w_mbytes_per_sec": 0 00:13:56.860 }, 00:13:56.860 "claimed": false, 00:13:56.860 "zoned": false, 00:13:56.860 "supported_io_types": { 00:13:56.860 "read": true, 00:13:56.860 "write": true, 00:13:56.860 "unmap": false, 00:13:56.861 "flush": false, 00:13:56.861 "reset": true, 00:13:56.861 "nvme_admin": false, 00:13:56.861 "nvme_io": false, 00:13:56.861 "nvme_io_md": false, 00:13:56.861 "write_zeroes": true, 00:13:56.861 "zcopy": false, 00:13:56.861 "get_zone_info": false, 00:13:56.861 "zone_management": false, 00:13:56.861 "zone_append": false, 00:13:56.861 "compare": false, 00:13:56.861 "compare_and_write": false, 00:13:56.861 "abort": false, 00:13:56.861 "seek_hole": false, 00:13:56.861 "seek_data": false, 00:13:56.861 "copy": false, 00:13:56.861 "nvme_iov_md": false 00:13:56.861 }, 00:13:56.861 "memory_domains": [ 00:13:56.861 { 00:13:56.861 "dma_device_id": "system", 00:13:56.861 "dma_device_type": 1 00:13:56.861 }, 00:13:56.861 { 00:13:56.861 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:56.861 "dma_device_type": 2 00:13:56.861 }, 00:13:56.861 { 00:13:56.861 "dma_device_id": "system", 00:13:56.861 "dma_device_type": 1 00:13:56.861 }, 00:13:56.861 { 00:13:56.861 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:56.861 "dma_device_type": 2 00:13:56.861 } 00:13:56.861 ], 00:13:56.861 "driver_specific": { 00:13:56.861 "raid": { 00:13:56.861 "uuid": "765c9d79-470d-49dd-b61b-094956b8b039", 00:13:56.861 "strip_size_kb": 0, 00:13:56.861 "state": "online", 00:13:56.861 "raid_level": "raid1", 00:13:56.861 "superblock": true, 00:13:56.861 "num_base_bdevs": 2, 00:13:56.861 "num_base_bdevs_discovered": 2, 00:13:56.861 "num_base_bdevs_operational": 2, 00:13:56.861 "base_bdevs_list": [ 00:13:56.861 { 00:13:56.861 "name": "pt1", 00:13:56.861 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:56.861 "is_configured": true, 00:13:56.861 "data_offset": 256, 00:13:56.861 "data_size": 7936 00:13:56.861 }, 00:13:56.861 { 00:13:56.861 "name": "pt2", 00:13:56.861 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:13:56.861 "is_configured": true, 00:13:56.861 "data_offset": 256, 00:13:56.861 "data_size": 7936 00:13:56.861 } 00:13:56.861 ] 00:13:56.861 } 00:13:56.861 } 00:13:56.861 }' 00:13:56.861 19:54:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:56.861 19:54:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:13:56.861 pt2' 00:13:56.861 19:54:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:56.861 19:54:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:13:56.861 19:54:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:56.861 19:54:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:13:56.861 19:54:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:56.861 19:54:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.861 19:54:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:13:56.861 19:54:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.861 19:54:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:13:56.861 19:54:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:13:56.861 19:54:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 
00:13:56.861 19:54:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:13:56.861 19:54:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:56.861 19:54:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.861 19:54:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:13:56.861 19:54:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.861 19:54:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:13:56.861 19:54:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:13:56.861 19:54:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:56.861 19:54:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.861 19:54:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:13:56.861 19:54:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:13:57.119 [2024-11-26 19:54:47.797302] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:57.119 19:54:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.119 19:54:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # '[' 765c9d79-470d-49dd-b61b-094956b8b039 '!=' 765c9d79-470d-49dd-b61b-094956b8b039 ']' 00:13:57.119 19:54:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:13:57.119 19:54:47 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@198 -- # case $1 in 00:13:57.119 19:54:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:13:57.119 19:54:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:13:57.119 19:54:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.119 19:54:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:13:57.119 [2024-11-26 19:54:47.829112] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:13:57.119 19:54:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.119 19:54:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:57.119 19:54:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:57.119 19:54:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:57.119 19:54:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:57.119 19:54:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:57.119 19:54:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:57.119 19:54:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:57.119 19:54:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:57.119 19:54:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:57.119 19:54:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:57.119 19:54:47 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:57.119 19:54:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.119 19:54:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:13:57.120 19:54:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:57.120 19:54:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.120 19:54:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:57.120 "name": "raid_bdev1", 00:13:57.120 "uuid": "765c9d79-470d-49dd-b61b-094956b8b039", 00:13:57.120 "strip_size_kb": 0, 00:13:57.120 "state": "online", 00:13:57.120 "raid_level": "raid1", 00:13:57.120 "superblock": true, 00:13:57.120 "num_base_bdevs": 2, 00:13:57.120 "num_base_bdevs_discovered": 1, 00:13:57.120 "num_base_bdevs_operational": 1, 00:13:57.120 "base_bdevs_list": [ 00:13:57.120 { 00:13:57.120 "name": null, 00:13:57.120 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:57.120 "is_configured": false, 00:13:57.120 "data_offset": 0, 00:13:57.120 "data_size": 7936 00:13:57.120 }, 00:13:57.120 { 00:13:57.120 "name": "pt2", 00:13:57.120 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:57.120 "is_configured": true, 00:13:57.120 "data_offset": 256, 00:13:57.120 "data_size": 7936 00:13:57.120 } 00:13:57.120 ] 00:13:57.120 }' 00:13:57.120 19:54:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:57.120 19:54:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:13:57.378 19:54:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:57.378 19:54:48 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.378 19:54:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:13:57.378 [2024-11-26 19:54:48.173136] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:57.378 [2024-11-26 19:54:48.173155] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:57.378 [2024-11-26 19:54:48.173206] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:57.378 [2024-11-26 19:54:48.173245] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:57.378 [2024-11-26 19:54:48.173254] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:13:57.378 19:54:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.378 19:54:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:13:57.378 19:54:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:57.378 19:54:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.378 19:54:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:13:57.378 19:54:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.378 19:54:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:13:57.378 19:54:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:13:57.378 19:54:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:13:57.378 19:54:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:13:57.378 19:54:48 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:13:57.378 19:54:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.378 19:54:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:13:57.378 19:54:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.378 19:54:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:13:57.378 19:54:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:13:57.378 19:54:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:13:57.378 19:54:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:13:57.378 19:54:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@519 -- # i=1 00:13:57.378 19:54:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:57.378 19:54:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.378 19:54:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:13:57.378 [2024-11-26 19:54:48.221161] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:57.378 [2024-11-26 19:54:48.221207] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:57.378 [2024-11-26 19:54:48.221220] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:13:57.378 [2024-11-26 19:54:48.221228] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:57.378 [2024-11-26 19:54:48.222985] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: 
pt_bdev registered 00:13:57.378 [2024-11-26 19:54:48.223015] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:57.378 [2024-11-26 19:54:48.223058] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:57.378 [2024-11-26 19:54:48.223096] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:57.378 [2024-11-26 19:54:48.223167] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:13:57.378 [2024-11-26 19:54:48.223178] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:13:57.379 [2024-11-26 19:54:48.223239] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:13:57.379 [2024-11-26 19:54:48.223321] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:13:57.379 [2024-11-26 19:54:48.223328] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:13:57.379 [2024-11-26 19:54:48.223410] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:57.379 pt2 00:13:57.379 19:54:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.379 19:54:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:57.379 19:54:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:57.379 19:54:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:57.379 19:54:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:57.379 19:54:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:57.379 19:54:48 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:57.379 19:54:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:57.379 19:54:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:57.379 19:54:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:57.379 19:54:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:57.379 19:54:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:57.379 19:54:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:57.379 19:54:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.379 19:54:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:13:57.379 19:54:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.379 19:54:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:57.379 "name": "raid_bdev1", 00:13:57.379 "uuid": "765c9d79-470d-49dd-b61b-094956b8b039", 00:13:57.379 "strip_size_kb": 0, 00:13:57.379 "state": "online", 00:13:57.379 "raid_level": "raid1", 00:13:57.379 "superblock": true, 00:13:57.379 "num_base_bdevs": 2, 00:13:57.379 "num_base_bdevs_discovered": 1, 00:13:57.379 "num_base_bdevs_operational": 1, 00:13:57.379 "base_bdevs_list": [ 00:13:57.379 { 00:13:57.379 "name": null, 00:13:57.379 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:57.379 "is_configured": false, 00:13:57.379 "data_offset": 256, 00:13:57.379 "data_size": 7936 00:13:57.379 }, 00:13:57.379 { 00:13:57.379 "name": "pt2", 00:13:57.379 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:57.379 "is_configured": true, 
00:13:57.379 "data_offset": 256, 00:13:57.379 "data_size": 7936 00:13:57.379 } 00:13:57.379 ] 00:13:57.379 }' 00:13:57.379 19:54:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:57.379 19:54:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:13:57.637 19:54:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:57.637 19:54:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.637 19:54:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:13:57.637 [2024-11-26 19:54:48.541227] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:57.637 [2024-11-26 19:54:48.541255] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:57.637 [2024-11-26 19:54:48.541321] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:57.637 [2024-11-26 19:54:48.541385] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:57.637 [2024-11-26 19:54:48.541393] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:13:57.637 19:54:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.637 19:54:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:57.637 19:54:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:13:57.637 19:54:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.637 19:54:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:13:57.637 19:54:48 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.895 19:54:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:13:57.895 19:54:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:13:57.895 19:54:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:13:57.895 19:54:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:57.895 19:54:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.895 19:54:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:13:57.895 [2024-11-26 19:54:48.585240] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:57.895 [2024-11-26 19:54:48.585289] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:57.895 [2024-11-26 19:54:48.585305] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:13:57.895 [2024-11-26 19:54:48.585314] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:57.895 [2024-11-26 19:54:48.587087] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:57.895 [2024-11-26 19:54:48.587116] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:57.895 [2024-11-26 19:54:48.587163] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:13:57.895 [2024-11-26 19:54:48.587199] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:57.895 [2024-11-26 19:54:48.587300] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:13:57.895 
[2024-11-26 19:54:48.587309] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:57.895 [2024-11-26 19:54:48.587322] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:13:57.895 [2024-11-26 19:54:48.587377] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:57.895 [2024-11-26 19:54:48.587433] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:13:57.895 [2024-11-26 19:54:48.587440] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:13:57.895 [2024-11-26 19:54:48.587498] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:13:57.895 [2024-11-26 19:54:48.587776] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:13:57.895 [2024-11-26 19:54:48.587790] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:13:57.895 [2024-11-26 19:54:48.587876] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:57.895 pt1 00:13:57.895 19:54:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.895 19:54:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:13:57.895 19:54:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:57.895 19:54:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:57.895 19:54:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:57.895 19:54:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:57.895 19:54:48 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:57.895 19:54:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:57.895 19:54:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:57.895 19:54:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:57.895 19:54:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:57.895 19:54:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:57.895 19:54:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:57.895 19:54:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.895 19:54:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:57.895 19:54:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:13:57.895 19:54:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.895 19:54:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:57.895 "name": "raid_bdev1", 00:13:57.895 "uuid": "765c9d79-470d-49dd-b61b-094956b8b039", 00:13:57.895 "strip_size_kb": 0, 00:13:57.895 "state": "online", 00:13:57.895 "raid_level": "raid1", 00:13:57.895 "superblock": true, 00:13:57.895 "num_base_bdevs": 2, 00:13:57.895 "num_base_bdevs_discovered": 1, 00:13:57.895 "num_base_bdevs_operational": 1, 00:13:57.895 "base_bdevs_list": [ 00:13:57.895 { 00:13:57.895 "name": null, 00:13:57.895 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:57.895 "is_configured": false, 00:13:57.895 "data_offset": 256, 00:13:57.895 "data_size": 7936 00:13:57.895 }, 00:13:57.895 { 00:13:57.895 
"name": "pt2", 00:13:57.895 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:57.895 "is_configured": true, 00:13:57.895 "data_offset": 256, 00:13:57.895 "data_size": 7936 00:13:57.895 } 00:13:57.895 ] 00:13:57.895 }' 00:13:57.895 19:54:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:57.895 19:54:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:13:58.153 19:54:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:13:58.153 19:54:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.153 19:54:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:13:58.153 19:54:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:13:58.153 19:54:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.153 19:54:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:13:58.153 19:54:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:58.153 19:54:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:13:58.154 19:54:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.154 19:54:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:13:58.154 [2024-11-26 19:54:48.953517] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:58.154 19:54:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.154 19:54:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # '[' 
765c9d79-470d-49dd-b61b-094956b8b039 '!=' 765c9d79-470d-49dd-b61b-094956b8b039 ']' 00:13:58.154 19:54:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@563 -- # killprocess 84874 00:13:58.154 19:54:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@954 -- # '[' -z 84874 ']' 00:13:58.154 19:54:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@958 -- # kill -0 84874 00:13:58.154 19:54:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@959 -- # uname 00:13:58.154 19:54:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:58.154 19:54:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84874 00:13:58.154 19:54:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:58.154 19:54:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:58.154 19:54:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84874' 00:13:58.154 killing process with pid 84874 00:13:58.154 19:54:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@973 -- # kill 84874 00:13:58.154 [2024-11-26 19:54:49.010280] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:58.154 19:54:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@978 -- # wait 84874 00:13:58.154 [2024-11-26 19:54:49.010373] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:58.154 [2024-11-26 19:54:49.010432] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:58.154 [2024-11-26 19:54:49.010447] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 
00:13:58.412 [2024-11-26 19:54:49.122960] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:58.978 ************************************ 00:13:58.978 END TEST raid_superblock_test_md_separate 00:13:58.978 ************************************ 00:13:58.978 19:54:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@565 -- # return 0 00:13:58.978 00:13:58.978 real 0m4.371s 00:13:58.978 user 0m6.682s 00:13:58.978 sys 0m0.744s 00:13:58.978 19:54:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:58.978 19:54:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:13:58.978 19:54:49 bdev_raid -- bdev/bdev_raid.sh@1006 -- # '[' true = true ']' 00:13:58.978 19:54:49 bdev_raid -- bdev/bdev_raid.sh@1007 -- # run_test raid_rebuild_test_sb_md_separate raid_rebuild_test raid1 2 true false true 00:13:58.978 19:54:49 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:13:58.978 19:54:49 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:58.978 19:54:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:58.978 ************************************ 00:13:58.978 START TEST raid_rebuild_test_sb_md_separate 00:13:58.978 ************************************ 00:13:58.978 19:54:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:13:58.978 19:54:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:58.978 19:54:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:13:58.978 19:54:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:13:58.978 19:54:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:13:58.978 19:54:49 bdev_raid.raid_rebuild_test_sb_md_separate 
-- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:58.978 19:54:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:58.978 19:54:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:58.978 19:54:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:58.978 19:54:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:58.978 19:54:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:58.978 19:54:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:58.978 19:54:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:58.978 19:54:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:58.978 19:54:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:13:58.978 19:54:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:58.979 19:54:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:58.979 19:54:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:58.979 19:54:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:58.979 19:54:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:58.979 19:54:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:58.979 19:54:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:58.979 19:54:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # strip_size=0 
00:13:58.979 19:54:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:13:58.979 19:54:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:13:58.979 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:58.979 19:54:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@597 -- # raid_pid=85181 00:13:58.979 19:54:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@598 -- # waitforlisten 85181 00:13:58.979 19:54:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@835 -- # '[' -z 85181 ']' 00:13:58.979 19:54:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:58.979 19:54:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:58.979 19:54:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:58.979 19:54:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:58.979 19:54:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:13:58.979 19:54:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:58.979 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:58.979 Zero copy mechanism will not be used. 00:13:58.979 [2024-11-26 19:54:49.828691] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 
00:13:58.979 [2024-11-26 19:54:49.828804] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85181 ] 00:13:59.237 [2024-11-26 19:54:49.983017] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:59.237 [2024-11-26 19:54:50.076972] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:59.495 [2024-11-26 19:54:50.195134] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:59.496 [2024-11-26 19:54:50.195325] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:59.754 19:54:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:59.754 19:54:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # return 0 00:13:59.754 19:54:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:59.754 19:54:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1_malloc 00:13:59.754 19:54:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.754 19:54:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:00.013 BaseBdev1_malloc 00:14:00.013 19:54:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.013 19:54:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:00.013 19:54:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.013 19:54:50 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:14:00.013 [2024-11-26 19:54:50.699660] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:00.013 [2024-11-26 19:54:50.699720] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:00.013 [2024-11-26 19:54:50.699739] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:00.013 [2024-11-26 19:54:50.699749] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:00.013 [2024-11-26 19:54:50.701432] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:00.013 [2024-11-26 19:54:50.701572] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:00.013 BaseBdev1 00:14:00.013 19:54:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.013 19:54:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:00.013 19:54:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2_malloc 00:14:00.013 19:54:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.013 19:54:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:00.013 BaseBdev2_malloc 00:14:00.013 19:54:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.013 19:54:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:00.013 19:54:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.013 19:54:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:00.013 [2024-11-26 19:54:50.733432] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:00.013 [2024-11-26 19:54:50.733477] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:00.013 [2024-11-26 19:54:50.733494] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:00.013 [2024-11-26 19:54:50.733504] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:00.013 [2024-11-26 19:54:50.735125] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:00.013 [2024-11-26 19:54:50.735155] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:00.013 BaseBdev2 00:14:00.013 19:54:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.013 19:54:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b spare_malloc 00:14:00.013 19:54:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.013 19:54:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:00.013 spare_malloc 00:14:00.013 19:54:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.013 19:54:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:00.013 19:54:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.013 19:54:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:00.013 spare_delay 00:14:00.013 19:54:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.013 19:54:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@609 -- 
# rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:00.013 19:54:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.013 19:54:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:00.013 [2024-11-26 19:54:50.792410] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:00.013 [2024-11-26 19:54:50.792458] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:00.013 [2024-11-26 19:54:50.792474] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:14:00.013 [2024-11-26 19:54:50.792483] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:00.013 [2024-11-26 19:54:50.794141] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:00.013 [2024-11-26 19:54:50.794173] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:00.013 spare 00:14:00.013 19:54:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.013 19:54:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:14:00.013 19:54:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.013 19:54:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:00.013 [2024-11-26 19:54:50.800456] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:00.013 [2024-11-26 19:54:50.802108] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:00.013 [2024-11-26 19:54:50.802327] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:00.014 [2024-11-26 19:54:50.802402] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096
00:14:00.014 [2024-11-26 19:54:50.802482] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10
00:14:00.014 [2024-11-26 19:54:50.802677] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
00:14:00.014 [2024-11-26 19:54:50.802747] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780
00:14:00.014 [2024-11-26 19:54:50.802896] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:14:00.014 19:54:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:00.014 19:54:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:14:00.014 19:54:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:14:00.014 19:54:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:14:00.014 19:54:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:14:00.014 19:54:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:14:00.014 19:54:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:14:00.014 19:54:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:14:00.014 19:54:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:14:00.014 19:54:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:14:00.014 19:54:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp
00:14:00.014 19:54:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:00.014 19:54:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:00.014 19:54:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:14:00.014 19:54:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:00.014 19:54:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:00.014 19:54:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:14:00.014 "name": "raid_bdev1",
00:14:00.014 "uuid": "f1b3b75c-6351-4ab1-875a-9788de294e1d",
00:14:00.014 "strip_size_kb": 0,
00:14:00.014 "state": "online",
00:14:00.014 "raid_level": "raid1",
00:14:00.014 "superblock": true,
00:14:00.014 "num_base_bdevs": 2,
00:14:00.014 "num_base_bdevs_discovered": 2,
00:14:00.014 "num_base_bdevs_operational": 2,
00:14:00.014 "base_bdevs_list": [
00:14:00.014 {
00:14:00.014 "name": "BaseBdev1",
00:14:00.014 "uuid": "cf09030e-fe97-5a76-9edc-1e3a3cb15d21",
00:14:00.014 "is_configured": true,
00:14:00.014 "data_offset": 256,
00:14:00.014 "data_size": 7936
00:14:00.014 },
00:14:00.014 {
00:14:00.014 "name": "BaseBdev2",
00:14:00.014 "uuid": "112bc69b-1ab9-5b56-a9b6-fed9b59c34ba",
00:14:00.014 "is_configured": true,
00:14:00.014 "data_offset": 256,
00:14:00.014 "data_size": 7936
00:14:00.014 }
00:14:00.014 ]
00:14:00.014 }'
00:14:00.014 19:54:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:14:00.014 19:54:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:14:00.272 19:54:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks'
00:14:00.272 19:54:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:14:00.272 19:54:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:00.272 19:54:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:14:00.272 [2024-11-26 19:54:51.100789] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:14:00.272 19:54:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:00.272 19:54:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936
00:14:00.272 19:54:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset'
00:14:00.272 19:54:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:00.272 19:54:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:00.272 19:54:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:14:00.272 19:54:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:00.272 19:54:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # data_offset=256
00:14:00.272 19:54:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@621 -- # '[' false = true ']'
00:14:00.272 19:54:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@624 -- # '[' true = true ']'
00:14:00.272 19:54:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@625 -- # local write_unit_size
00:14:00.272 19:54:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0
00:14:00.272 19:54:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock
00:14:00.272 19:54:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1')
00:14:00.272 19:54:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list
00:14:00.272 19:54:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0')
00:14:00.272 19:54:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list
00:14:00.272 19:54:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i
00:14:00.272 19:54:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:14:00.272 19:54:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:14:00.272 19:54:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0
00:14:00.530 [2024-11-26 19:54:51.348643] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0
00:14:00.530 /dev/nbd0
00:14:00.530 19:54:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:14:00.530 19:54:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:14:00.530 19:54:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:14:00.530 19:54:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i
00:14:00.530 19:54:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:14:00.531 19:54:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:14:00.531 19:54:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:14:00.531 19:54:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break
00:14:00.531 19:54:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:14:00.531 19:54:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:14:00.531 19:54:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:14:00.531 1+0 records in
00:14:00.531 1+0 records out
00:14:00.531 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000248089 s, 16.5 MB/s
00:14:00.531 19:54:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:14:00.531 19:54:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096
00:14:00.531 19:54:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:14:00.531 19:54:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:14:00.531 19:54:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0
00:14:00.531 19:54:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:14:00.531 19:54:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:14:00.531 19:54:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']'
00:14:00.531 19:54:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@633 -- # write_unit_size=1
00:14:00.531 19:54:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct
00:14:01.465 7936+0 records in
00:14:01.465 7936+0 records out
00:14:01.465 32505856 bytes (33 MB, 31 MiB) copied, 0.637638 s, 51.0 MB/s
00:14:01.465 19:54:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0
00:14:01.465 19:54:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock
00:14:01.465 19:54:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:14:01.465 19:54:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list
00:14:01.465 19:54:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i
00:14:01.465 19:54:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:14:01.465 19:54:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0
00:14:01.465 19:54:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:14:01.465 [2024-11-26 19:54:52.253988] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:14:01.465 19:54:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:14:01.465 19:54:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:14:01.465 19:54:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:14:01.465 19:54:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:14:01.465 19:54:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:14:01.465 19:54:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break
00:14:01.465 19:54:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0
00:14:01.465 19:54:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1
00:14:01.465 19:54:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:01.465 19:54:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:14:01.465 [2024-11-26 19:54:52.263320] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:14:01.465 19:54:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:01.465 19:54:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:14:01.465 19:54:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:14:01.465 19:54:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:14:01.465 19:54:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:14:01.465 19:54:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:14:01.465 19:54:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:14:01.465 19:54:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:14:01.465 19:54:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:14:01.465 19:54:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:14:01.465 19:54:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp
00:14:01.465 19:54:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:01.465 19:54:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:01.465 19:54:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:01.465 19:54:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:14:01.465 19:54:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:01.465 19:54:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:14:01.465 "name": "raid_bdev1",
00:14:01.465 "uuid": "f1b3b75c-6351-4ab1-875a-9788de294e1d",
00:14:01.465 "strip_size_kb": 0,
00:14:01.465 "state": "online",
00:14:01.465 "raid_level": "raid1",
00:14:01.465 "superblock": true,
00:14:01.465 "num_base_bdevs": 2,
00:14:01.465 "num_base_bdevs_discovered": 1,
00:14:01.465 "num_base_bdevs_operational": 1,
00:14:01.465 "base_bdevs_list": [
00:14:01.465 {
00:14:01.465 "name": null,
00:14:01.465 "uuid": "00000000-0000-0000-0000-000000000000",
00:14:01.465 "is_configured": false,
00:14:01.465 "data_offset": 0,
00:14:01.465 "data_size": 7936
00:14:01.465 },
00:14:01.465 {
00:14:01.465 "name": "BaseBdev2",
00:14:01.465 "uuid": "112bc69b-1ab9-5b56-a9b6-fed9b59c34ba",
00:14:01.465 "is_configured": true,
00:14:01.465 "data_offset": 256,
00:14:01.465 "data_size": 7936
00:14:01.465 }
00:14:01.465 ]
00:14:01.465 }'
00:14:01.465 19:54:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:14:01.465 19:54:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:14:01.725 19:54:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:14:01.725 19:54:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:01.725 19:54:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:14:01.725 [2024-11-26 19:54:52.579392] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:14:01.725 [2024-11-26 19:54:52.587317] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260
00:14:01.725 19:54:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:01.725 19:54:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@647 -- # sleep 1
00:14:01.725 [2024-11-26 19:54:52.589028] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:14:03.096 19:54:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:14:03.096 19:54:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:14:03.096 19:54:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:14:03.096 19:54:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare
00:14:03.096 19:54:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:14:03.096 19:54:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:03.096 19:54:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:03.096 19:54:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:03.096 19:54:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:14:03.096 19:54:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:03.096 19:54:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:14:03.096 "name": "raid_bdev1",
00:14:03.096 "uuid": "f1b3b75c-6351-4ab1-875a-9788de294e1d",
00:14:03.096 "strip_size_kb": 0,
00:14:03.096 "state": "online",
00:14:03.096 "raid_level": "raid1",
00:14:03.096 "superblock": true,
00:14:03.096 "num_base_bdevs": 2,
00:14:03.096 "num_base_bdevs_discovered": 2,
00:14:03.096 "num_base_bdevs_operational": 2,
00:14:03.096 "process": {
00:14:03.096 "type": "rebuild",
00:14:03.096 "target": "spare",
00:14:03.096 "progress": {
00:14:03.096 "blocks": 2560,
00:14:03.096 "percent": 32
00:14:03.096 }
00:14:03.096 },
00:14:03.096 "base_bdevs_list": [
00:14:03.096 {
00:14:03.096 "name": "spare",
00:14:03.096 "uuid": "c16767c3-3c16-5e1a-86ea-13da8fe77281",
00:14:03.096 "is_configured": true,
00:14:03.096 "data_offset": 256,
00:14:03.096 "data_size": 7936
00:14:03.096 },
00:14:03.096 {
00:14:03.096 "name": "BaseBdev2",
00:14:03.096 "uuid": "112bc69b-1ab9-5b56-a9b6-fed9b59c34ba",
00:14:03.096 "is_configured": true,
00:14:03.096 "data_offset": 256,
00:14:03.096 "data_size": 7936
00:14:03.096 }
00:14:03.096 ]
00:14:03.096 }'
00:14:03.096 19:54:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:14:03.096 19:54:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:14:03.096 19:54:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:14:03.096 19:54:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:14:03.096 19:54:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare
00:14:03.096 19:54:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:03.096 19:54:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:14:03.096 [2024-11-26 19:54:53.703199] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:14:03.097 [2024-11-26 19:54:53.795733] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:14:03.097 [2024-11-26 19:54:53.795803] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:14:03.097 [2024-11-26 19:54:53.795817] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:14:03.097 [2024-11-26 19:54:53.795830] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device
00:14:03.097 19:54:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:03.097 19:54:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:14:03.097 19:54:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:14:03.097 19:54:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:14:03.097 19:54:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:14:03.097 19:54:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:14:03.097 19:54:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:14:03.097 19:54:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:14:03.097 19:54:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:14:03.097 19:54:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:14:03.097 19:54:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp
00:14:03.097 19:54:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:03.097 19:54:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:03.097 19:54:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:03.097 19:54:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:14:03.097 19:54:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:03.097 19:54:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:14:03.097 "name": "raid_bdev1",
00:14:03.097 "uuid": "f1b3b75c-6351-4ab1-875a-9788de294e1d",
00:14:03.097 "strip_size_kb": 0,
00:14:03.097 "state": "online",
00:14:03.097 "raid_level": "raid1",
00:14:03.097 "superblock": true,
00:14:03.097 "num_base_bdevs": 2,
00:14:03.097 "num_base_bdevs_discovered": 1,
00:14:03.097 "num_base_bdevs_operational": 1,
00:14:03.097 "base_bdevs_list": [
00:14:03.097 {
00:14:03.097 "name": null,
00:14:03.097 "uuid": "00000000-0000-0000-0000-000000000000",
00:14:03.097 "is_configured": false,
00:14:03.097 "data_offset": 0,
00:14:03.097 "data_size": 7936
00:14:03.097 },
00:14:03.097 {
00:14:03.097 "name": "BaseBdev2",
00:14:03.097 "uuid": "112bc69b-1ab9-5b56-a9b6-fed9b59c34ba",
00:14:03.097 "is_configured": true,
00:14:03.097 "data_offset": 256,
00:14:03.097 "data_size": 7936
00:14:03.097 }
00:14:03.097 ]
00:14:03.097 }'
00:14:03.097 19:54:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:14:03.097 19:54:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:14:03.355 19:54:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none
00:14:03.355 19:54:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:14:03.355 19:54:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:14:03.355 19:54:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none
00:14:03.355 19:54:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:14:03.355 19:54:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:03.355 19:54:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:03.355 19:54:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:03.355 19:54:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:14:03.355 19:54:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:03.355 19:54:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:14:03.355 "name": "raid_bdev1",
00:14:03.355 "uuid": "f1b3b75c-6351-4ab1-875a-9788de294e1d",
00:14:03.355 "strip_size_kb": 0,
00:14:03.355 "state": "online",
00:14:03.355 "raid_level": "raid1",
00:14:03.355 "superblock": true,
00:14:03.355 "num_base_bdevs": 2,
00:14:03.355 "num_base_bdevs_discovered": 1,
00:14:03.355 "num_base_bdevs_operational": 1,
00:14:03.355 "base_bdevs_list": [
00:14:03.355 {
00:14:03.355 "name": null,
00:14:03.355 "uuid": "00000000-0000-0000-0000-000000000000",
00:14:03.355 "is_configured": false,
00:14:03.355 "data_offset": 0,
00:14:03.355 "data_size": 7936
00:14:03.355 },
00:14:03.355 {
00:14:03.355 "name": "BaseBdev2",
00:14:03.355 "uuid": "112bc69b-1ab9-5b56-a9b6-fed9b59c34ba",
00:14:03.355 "is_configured": true,
00:14:03.355 "data_offset": 256,
00:14:03.355 "data_size": 7936
00:14:03.355 }
00:14:03.355 ]
00:14:03.355 }'
00:14:03.355 19:54:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:14:03.355 19:54:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:14:03.355 19:54:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:14:03.355 19:54:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:14:03.355 19:54:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:14:03.355 19:54:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:03.355 19:54:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:14:03.355 [2024-11-26 19:54:54.240636] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:14:03.355 [2024-11-26 19:54:54.248580] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330
00:14:03.355 19:54:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:03.355 19:54:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@663 -- # sleep 1
00:14:03.355 [2024-11-26 19:54:54.250291] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:14:04.728 19:54:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:14:04.728 19:54:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:14:04.728 19:54:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:14:04.728 19:54:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare
00:14:04.728 19:54:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:14:04.728 19:54:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:04.728 19:54:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:04.728 19:54:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:14:04.728 19:54:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:04.728 19:54:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:04.728 19:54:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:14:04.728 "name": "raid_bdev1",
00:14:04.728 "uuid": "f1b3b75c-6351-4ab1-875a-9788de294e1d",
00:14:04.728 "strip_size_kb": 0,
00:14:04.728 "state": "online",
00:14:04.728 "raid_level": "raid1",
00:14:04.728 "superblock": true,
00:14:04.728 "num_base_bdevs": 2,
00:14:04.728 "num_base_bdevs_discovered": 2,
00:14:04.728 "num_base_bdevs_operational": 2,
00:14:04.728 "process": {
00:14:04.728 "type": "rebuild",
00:14:04.728 "target": "spare",
00:14:04.728 "progress": {
00:14:04.728 "blocks": 2560,
00:14:04.728 "percent": 32
00:14:04.728 }
00:14:04.728 },
00:14:04.728 "base_bdevs_list": [
00:14:04.728 {
00:14:04.728 "name": "spare",
00:14:04.728 "uuid": "c16767c3-3c16-5e1a-86ea-13da8fe77281",
00:14:04.728 "is_configured": true,
00:14:04.728 "data_offset": 256,
00:14:04.728 "data_size": 7936
00:14:04.728 },
00:14:04.728 {
00:14:04.728 "name": "BaseBdev2",
00:14:04.728 "uuid": "112bc69b-1ab9-5b56-a9b6-fed9b59c34ba",
00:14:04.728 "is_configured": true,
00:14:04.728 "data_offset": 256,
00:14:04.728 "data_size": 7936
00:14:04.728 }
00:14:04.728 ]
00:14:04.728 }'
00:14:04.728 19:54:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:14:04.728 19:54:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:14:04.728 19:54:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:14:04.728 19:54:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:14:04.728 19:54:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' true = true ']'
00:14:04.728 19:54:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' = false ']'
00:14:04.728 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected
00:14:04.728 19:54:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2
00:14:04.728 19:54:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']'
00:14:04.728 19:54:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']'
00:14:04.728 19:54:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # local timeout=565
00:14:04.728 19:54:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:14:04.728 19:54:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:14:04.729 19:54:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:14:04.729 19:54:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:14:04.729 19:54:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare
00:14:04.729 19:54:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:14:04.729 19:54:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:04.729 19:54:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:04.729 19:54:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:14:04.729 19:54:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:04.729 19:54:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:04.729 19:54:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:14:04.729 "name": "raid_bdev1",
00:14:04.729 "uuid": "f1b3b75c-6351-4ab1-875a-9788de294e1d",
00:14:04.729 "strip_size_kb": 0,
00:14:04.729 "state": "online",
00:14:04.729 "raid_level": "raid1",
00:14:04.729 "superblock": true,
00:14:04.729 "num_base_bdevs": 2,
00:14:04.729 "num_base_bdevs_discovered": 2,
00:14:04.729 "num_base_bdevs_operational": 2,
00:14:04.729 "process": {
00:14:04.729 "type": "rebuild",
00:14:04.729 "target": "spare",
00:14:04.729 "progress": {
00:14:04.729 "blocks": 2816,
00:14:04.729 "percent": 35
00:14:04.729 }
00:14:04.729 },
00:14:04.729 "base_bdevs_list": [
00:14:04.729 {
00:14:04.729 "name": "spare",
00:14:04.729 "uuid": "c16767c3-3c16-5e1a-86ea-13da8fe77281",
00:14:04.729 "is_configured": true,
00:14:04.729 "data_offset": 256,
00:14:04.729 "data_size": 7936
00:14:04.729 },
00:14:04.729 {
00:14:04.729 "name": "BaseBdev2",
00:14:04.729 "uuid": "112bc69b-1ab9-5b56-a9b6-fed9b59c34ba",
00:14:04.729 "is_configured": true,
00:14:04.729 "data_offset": 256,
00:14:04.729 "data_size": 7936
00:14:04.729 }
00:14:04.729 ]
00:14:04.729 }'
00:14:04.729 19:54:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:14:04.729 19:54:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:14:04.729 19:54:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:14:04.729 19:54:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:14:04.729 19:54:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1
00:14:05.663 19:54:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:14:05.663 19:54:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:14:05.663 19:54:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:14:05.663 19:54:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:14:05.663 19:54:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare
00:14:05.663 19:54:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:14:05.663 19:54:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:05.663 19:54:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:05.663 19:54:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:05.663 19:54:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:14:05.663 19:54:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:05.663 19:54:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:14:05.663 "name": "raid_bdev1",
00:14:05.663 "uuid": "f1b3b75c-6351-4ab1-875a-9788de294e1d",
00:14:05.663 "strip_size_kb": 0,
00:14:05.663 "state": "online",
00:14:05.663 "raid_level": "raid1",
00:14:05.663 "superblock": true,
00:14:05.663 "num_base_bdevs": 2,
00:14:05.664 "num_base_bdevs_discovered": 2,
00:14:05.664 "num_base_bdevs_operational": 2,
00:14:05.664 "process": {
00:14:05.664 "type": "rebuild",
00:14:05.664 "target": "spare",
00:14:05.664 "progress": {
00:14:05.664 "blocks": 5632,
00:14:05.664 "percent": 70
00:14:05.664 }
00:14:05.664 },
00:14:05.664 "base_bdevs_list": [
00:14:05.664 {
00:14:05.664 "name": "spare",
00:14:05.664 "uuid": "c16767c3-3c16-5e1a-86ea-13da8fe77281",
00:14:05.664 "is_configured": true,
00:14:05.664 "data_offset": 256,
00:14:05.664 "data_size": 7936
00:14:05.664 },
00:14:05.664 {
00:14:05.664 "name": "BaseBdev2",
00:14:05.664 "uuid": "112bc69b-1ab9-5b56-a9b6-fed9b59c34ba",
00:14:05.664 "is_configured": true,
00:14:05.664 "data_offset": 256,
00:14:05.664 "data_size": 7936
00:14:05.664 }
00:14:05.664 ]
00:14:05.664 }'
00:14:05.664 19:54:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:14:05.664 19:54:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:14:05.664 19:54:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:14:05.664 19:54:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:14:05.664 19:54:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1
00:14:06.598 [2024-11-26 19:54:57.367393] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1
00:14:06.598 [2024-11-26 19:54:57.367473] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1
00:14:06.598 [2024-11-26 19:54:57.367570] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:14:06.857 19:54:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:14:06.857 19:54:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:14:06.857 19:54:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:14:06.857 19:54:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:14:06.857 19:54:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare
00:14:06.857 19:54:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:14:06.857 19:54:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:06.857 19:54:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:06.857 19:54:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:14:06.857 19:54:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:06.857 19:54:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:06.857 19:54:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:14:06.857 "name": "raid_bdev1",
00:14:06.857 "uuid": "f1b3b75c-6351-4ab1-875a-9788de294e1d",
00:14:06.857 "strip_size_kb": 0,
00:14:06.857 "state": "online",
00:14:06.857 "raid_level": "raid1",
00:14:06.857 "superblock": true,
00:14:06.857 "num_base_bdevs": 2,
00:14:06.857 "num_base_bdevs_discovered": 2,
00:14:06.857 "num_base_bdevs_operational": 2,
00:14:06.857 "base_bdevs_list": [
00:14:06.857 {
00:14:06.857 "name": "spare",
00:14:06.857 "uuid": "c16767c3-3c16-5e1a-86ea-13da8fe77281",
00:14:06.857 "is_configured": true,
00:14:06.857 "data_offset": 256,
00:14:06.857 "data_size": 7936
00:14:06.857 },
00:14:06.857 {
00:14:06.857 "name": "BaseBdev2",
00:14:06.857 "uuid": "112bc69b-1ab9-5b56-a9b6-fed9b59c34ba",
00:14:06.857 "is_configured": true,
00:14:06.857 "data_offset": 256,
00:14:06.857 "data_size": 7936
00:14:06.857 }
00:14:06.857 ]
00:14:06.857 }'
00:14:06.857 19:54:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:14:06.857 19:54:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]]
00:14:06.857 19:54:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:14:06.857 19:54:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]]
00:14:06.857 19:54:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@709 -- # break
00:14:06.857 19:54:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none
00:14:06.857 19:54:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:14:06.857 19:54:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:14:06.857 19:54:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none
00:14:06.857 19:54:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:14:06.857 19:54:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:06.857 19:54:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:06.857 19:54:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:06.857 19:54:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:14:06.857 19:54:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:06.857 19:54:57
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:06.857 "name": "raid_bdev1", 00:14:06.857 "uuid": "f1b3b75c-6351-4ab1-875a-9788de294e1d", 00:14:06.857 "strip_size_kb": 0, 00:14:06.857 "state": "online", 00:14:06.857 "raid_level": "raid1", 00:14:06.857 "superblock": true, 00:14:06.857 "num_base_bdevs": 2, 00:14:06.857 "num_base_bdevs_discovered": 2, 00:14:06.857 "num_base_bdevs_operational": 2, 00:14:06.857 "base_bdevs_list": [ 00:14:06.857 { 00:14:06.857 "name": "spare", 00:14:06.857 "uuid": "c16767c3-3c16-5e1a-86ea-13da8fe77281", 00:14:06.857 "is_configured": true, 00:14:06.857 "data_offset": 256, 00:14:06.857 "data_size": 7936 00:14:06.857 }, 00:14:06.857 { 00:14:06.857 "name": "BaseBdev2", 00:14:06.857 "uuid": "112bc69b-1ab9-5b56-a9b6-fed9b59c34ba", 00:14:06.857 "is_configured": true, 00:14:06.857 "data_offset": 256, 00:14:06.857 "data_size": 7936 00:14:06.857 } 00:14:06.857 ] 00:14:06.857 }' 00:14:06.857 19:54:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:06.857 19:54:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:06.857 19:54:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:06.857 19:54:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:06.857 19:54:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:06.857 19:54:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:06.857 19:54:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:06.857 19:54:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:06.857 19:54:57 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:06.857 19:54:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:06.857 19:54:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:06.857 19:54:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:06.857 19:54:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:06.857 19:54:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:06.857 19:54:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:06.857 19:54:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:06.857 19:54:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.857 19:54:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:06.857 19:54:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.857 19:54:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:06.857 "name": "raid_bdev1", 00:14:06.857 "uuid": "f1b3b75c-6351-4ab1-875a-9788de294e1d", 00:14:06.857 "strip_size_kb": 0, 00:14:06.857 "state": "online", 00:14:06.857 "raid_level": "raid1", 00:14:06.857 "superblock": true, 00:14:06.857 "num_base_bdevs": 2, 00:14:06.857 "num_base_bdevs_discovered": 2, 00:14:06.857 "num_base_bdevs_operational": 2, 00:14:06.857 "base_bdevs_list": [ 00:14:06.857 { 00:14:06.857 "name": "spare", 00:14:06.857 "uuid": "c16767c3-3c16-5e1a-86ea-13da8fe77281", 00:14:06.857 "is_configured": true, 00:14:06.857 "data_offset": 256, 00:14:06.857 "data_size": 
7936 00:14:06.857 }, 00:14:06.857 { 00:14:06.858 "name": "BaseBdev2", 00:14:06.858 "uuid": "112bc69b-1ab9-5b56-a9b6-fed9b59c34ba", 00:14:06.858 "is_configured": true, 00:14:06.858 "data_offset": 256, 00:14:06.858 "data_size": 7936 00:14:06.858 } 00:14:06.858 ] 00:14:06.858 }' 00:14:06.858 19:54:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:06.858 19:54:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:07.117 19:54:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:07.118 19:54:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.118 19:54:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:07.118 [2024-11-26 19:54:58.040364] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:07.118 [2024-11-26 19:54:58.040393] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:07.118 [2024-11-26 19:54:58.040472] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:07.118 [2024-11-26 19:54:58.040540] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:07.118 [2024-11-26 19:54:58.040549] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:07.118 19:54:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.118 19:54:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # jq length 00:14:07.118 19:54:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:07.118 19:54:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 
00:14:07.118 19:54:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:07.381 19:54:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.381 19:54:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:07.381 19:54:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:07.381 19:54:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:14:07.381 19:54:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:14:07.381 19:54:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:07.381 19:54:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:14:07.381 19:54:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:07.381 19:54:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:07.381 19:54:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:07.381 19:54:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:14:07.381 19:54:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:07.381 19:54:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:07.381 19:54:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:14:07.381 /dev/nbd0 00:14:07.381 19:54:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 
00:14:07.381 19:54:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:07.381 19:54:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:07.381 19:54:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:14:07.381 19:54:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:07.381 19:54:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:07.381 19:54:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:07.381 19:54:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:14:07.381 19:54:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:07.381 19:54:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:07.381 19:54:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:07.381 1+0 records in 00:14:07.381 1+0 records out 00:14:07.381 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000196461 s, 20.8 MB/s 00:14:07.381 19:54:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:07.640 19:54:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:14:07.640 19:54:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:07.640 19:54:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:07.640 19:54:58 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:14:07.640 19:54:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:07.640 19:54:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:07.640 19:54:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:14:07.640 /dev/nbd1 00:14:07.640 19:54:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:07.640 19:54:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:07.640 19:54:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:07.640 19:54:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:14:07.640 19:54:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:07.640 19:54:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:07.640 19:54:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:07.640 19:54:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:14:07.640 19:54:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:07.640 19:54:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:07.640 19:54:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:07.640 1+0 records in 00:14:07.640 1+0 records out 00:14:07.640 4096 bytes (4.1 kB, 4.0 KiB) copied, 
0.000216912 s, 18.9 MB/s 00:14:07.640 19:54:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:07.640 19:54:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:14:07.640 19:54:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:07.640 19:54:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:07.640 19:54:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:14:07.640 19:54:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:07.640 19:54:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:07.640 19:54:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:14:07.898 19:54:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:14:07.898 19:54:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:07.898 19:54:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:07.898 19:54:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:07.898 19:54:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:14:07.898 19:54:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:07.898 19:54:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:08.156 19:54:58 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:08.156 19:54:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:08.156 19:54:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:08.156 19:54:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:08.156 19:54:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:08.156 19:54:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:08.156 19:54:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:14:08.156 19:54:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:14:08.156 19:54:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:08.156 19:54:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:08.415 19:54:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:08.415 19:54:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:08.415 19:54:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:08.415 19:54:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:08.415 19:54:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:08.415 19:54:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:08.415 19:54:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:14:08.415 19:54:59 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:14:08.415 19:54:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:14:08.415 19:54:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:14:08.415 19:54:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.415 19:54:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:08.415 19:54:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.415 19:54:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:08.415 19:54:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.415 19:54:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:08.415 [2024-11-26 19:54:59.116068] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:08.415 [2024-11-26 19:54:59.116119] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:08.415 [2024-11-26 19:54:59.116139] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:14:08.415 [2024-11-26 19:54:59.116148] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:08.415 [2024-11-26 19:54:59.117963] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:08.415 [2024-11-26 19:54:59.117996] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:08.415 [2024-11-26 19:54:59.118054] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:08.415 [2024-11-26 19:54:59.118099] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev spare is claimed 00:14:08.415 [2024-11-26 19:54:59.118212] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:08.415 spare 00:14:08.415 19:54:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.415 19:54:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:14:08.415 19:54:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.415 19:54:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:08.415 [2024-11-26 19:54:59.218281] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:14:08.415 [2024-11-26 19:54:59.218304] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:14:08.415 [2024-11-26 19:54:59.218405] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:14:08.415 [2024-11-26 19:54:59.218545] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:14:08.415 [2024-11-26 19:54:59.218554] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:14:08.416 [2024-11-26 19:54:59.218657] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:08.416 19:54:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.416 19:54:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:08.416 19:54:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:08.416 19:54:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:08.416 19:54:59 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:08.416 19:54:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:08.416 19:54:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:08.416 19:54:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:08.416 19:54:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:08.416 19:54:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:08.416 19:54:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:08.416 19:54:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:08.416 19:54:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:08.416 19:54:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.416 19:54:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:08.416 19:54:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.416 19:54:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:08.416 "name": "raid_bdev1", 00:14:08.416 "uuid": "f1b3b75c-6351-4ab1-875a-9788de294e1d", 00:14:08.416 "strip_size_kb": 0, 00:14:08.416 "state": "online", 00:14:08.416 "raid_level": "raid1", 00:14:08.416 "superblock": true, 00:14:08.416 "num_base_bdevs": 2, 00:14:08.416 "num_base_bdevs_discovered": 2, 00:14:08.416 "num_base_bdevs_operational": 2, 00:14:08.416 "base_bdevs_list": [ 00:14:08.416 { 00:14:08.416 "name": "spare", 00:14:08.416 "uuid": "c16767c3-3c16-5e1a-86ea-13da8fe77281", 00:14:08.416 
"is_configured": true, 00:14:08.416 "data_offset": 256, 00:14:08.416 "data_size": 7936 00:14:08.416 }, 00:14:08.416 { 00:14:08.416 "name": "BaseBdev2", 00:14:08.416 "uuid": "112bc69b-1ab9-5b56-a9b6-fed9b59c34ba", 00:14:08.416 "is_configured": true, 00:14:08.416 "data_offset": 256, 00:14:08.416 "data_size": 7936 00:14:08.416 } 00:14:08.416 ] 00:14:08.416 }' 00:14:08.416 19:54:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:08.416 19:54:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:08.675 19:54:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:08.675 19:54:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:08.675 19:54:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:08.675 19:54:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:08.675 19:54:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:08.675 19:54:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:08.675 19:54:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.675 19:54:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:08.675 19:54:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:08.675 19:54:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.675 19:54:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:08.675 "name": "raid_bdev1", 00:14:08.675 "uuid": 
"f1b3b75c-6351-4ab1-875a-9788de294e1d", 00:14:08.675 "strip_size_kb": 0, 00:14:08.675 "state": "online", 00:14:08.675 "raid_level": "raid1", 00:14:08.675 "superblock": true, 00:14:08.675 "num_base_bdevs": 2, 00:14:08.675 "num_base_bdevs_discovered": 2, 00:14:08.675 "num_base_bdevs_operational": 2, 00:14:08.675 "base_bdevs_list": [ 00:14:08.675 { 00:14:08.675 "name": "spare", 00:14:08.675 "uuid": "c16767c3-3c16-5e1a-86ea-13da8fe77281", 00:14:08.675 "is_configured": true, 00:14:08.675 "data_offset": 256, 00:14:08.675 "data_size": 7936 00:14:08.675 }, 00:14:08.675 { 00:14:08.675 "name": "BaseBdev2", 00:14:08.675 "uuid": "112bc69b-1ab9-5b56-a9b6-fed9b59c34ba", 00:14:08.675 "is_configured": true, 00:14:08.675 "data_offset": 256, 00:14:08.675 "data_size": 7936 00:14:08.675 } 00:14:08.675 ] 00:14:08.675 }' 00:14:08.675 19:54:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:08.933 19:54:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:08.933 19:54:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:08.933 19:54:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:08.933 19:54:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:14:08.933 19:54:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:08.933 19:54:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.933 19:54:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:08.933 19:54:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.933 19:54:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # 
[[ spare == \s\p\a\r\e ]] 00:14:08.933 19:54:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:08.933 19:54:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.933 19:54:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:08.933 [2024-11-26 19:54:59.676197] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:08.933 19:54:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.933 19:54:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:08.933 19:54:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:08.933 19:54:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:08.933 19:54:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:08.933 19:54:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:08.933 19:54:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:08.933 19:54:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:08.933 19:54:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:08.933 19:54:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:08.933 19:54:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:08.933 19:54:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:08.933 19:54:59 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:08.933 19:54:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.933 19:54:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:08.933 19:54:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.933 19:54:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:08.933 "name": "raid_bdev1", 00:14:08.933 "uuid": "f1b3b75c-6351-4ab1-875a-9788de294e1d", 00:14:08.933 "strip_size_kb": 0, 00:14:08.933 "state": "online", 00:14:08.933 "raid_level": "raid1", 00:14:08.933 "superblock": true, 00:14:08.933 "num_base_bdevs": 2, 00:14:08.933 "num_base_bdevs_discovered": 1, 00:14:08.933 "num_base_bdevs_operational": 1, 00:14:08.933 "base_bdevs_list": [ 00:14:08.933 { 00:14:08.933 "name": null, 00:14:08.933 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:08.933 "is_configured": false, 00:14:08.933 "data_offset": 0, 00:14:08.933 "data_size": 7936 00:14:08.933 }, 00:14:08.933 { 00:14:08.933 "name": "BaseBdev2", 00:14:08.933 "uuid": "112bc69b-1ab9-5b56-a9b6-fed9b59c34ba", 00:14:08.933 "is_configured": true, 00:14:08.933 "data_offset": 256, 00:14:08.933 "data_size": 7936 00:14:08.933 } 00:14:08.933 ] 00:14:08.933 }' 00:14:08.933 19:54:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:08.933 19:54:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:09.192 19:55:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:09.192 19:55:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.192 19:55:00 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:14:09.192 [2024-11-26 19:55:00.008296] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:09.192 [2024-11-26 19:55:00.008492] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:14:09.192 [2024-11-26 19:55:00.008507] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:14:09.192 [2024-11-26 19:55:00.008545] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:09.192 [2024-11-26 19:55:00.015929] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:14:09.192 19:55:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.192 19:55:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@757 -- # sleep 1 00:14:09.192 [2024-11-26 19:55:00.017619] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:10.126 19:55:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:10.126 19:55:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:10.126 19:55:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:10.126 19:55:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:10.126 19:55:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:10.126 19:55:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:10.126 19:55:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:10.126 19:55:01 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.126 19:55:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:10.126 19:55:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.126 19:55:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:10.126 "name": "raid_bdev1", 00:14:10.126 "uuid": "f1b3b75c-6351-4ab1-875a-9788de294e1d", 00:14:10.126 "strip_size_kb": 0, 00:14:10.126 "state": "online", 00:14:10.126 "raid_level": "raid1", 00:14:10.126 "superblock": true, 00:14:10.126 "num_base_bdevs": 2, 00:14:10.126 "num_base_bdevs_discovered": 2, 00:14:10.126 "num_base_bdevs_operational": 2, 00:14:10.126 "process": { 00:14:10.126 "type": "rebuild", 00:14:10.126 "target": "spare", 00:14:10.126 "progress": { 00:14:10.126 "blocks": 2560, 00:14:10.126 "percent": 32 00:14:10.126 } 00:14:10.126 }, 00:14:10.126 "base_bdevs_list": [ 00:14:10.126 { 00:14:10.126 "name": "spare", 00:14:10.126 "uuid": "c16767c3-3c16-5e1a-86ea-13da8fe77281", 00:14:10.126 "is_configured": true, 00:14:10.126 "data_offset": 256, 00:14:10.126 "data_size": 7936 00:14:10.126 }, 00:14:10.126 { 00:14:10.126 "name": "BaseBdev2", 00:14:10.126 "uuid": "112bc69b-1ab9-5b56-a9b6-fed9b59c34ba", 00:14:10.126 "is_configured": true, 00:14:10.126 "data_offset": 256, 00:14:10.126 "data_size": 7936 00:14:10.126 } 00:14:10.126 ] 00:14:10.126 }' 00:14:10.127 19:55:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:10.385 19:55:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:10.385 19:55:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:10.385 19:55:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == 
\s\p\a\r\e ]] 00:14:10.385 19:55:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:14:10.385 19:55:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.385 19:55:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:10.385 [2024-11-26 19:55:01.128416] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:10.385 [2024-11-26 19:55:01.224370] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:10.385 [2024-11-26 19:55:01.224424] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:10.385 [2024-11-26 19:55:01.224437] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:10.385 [2024-11-26 19:55:01.224451] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:10.385 19:55:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.385 19:55:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:10.385 19:55:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:10.385 19:55:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:10.385 19:55:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:10.385 19:55:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:10.385 19:55:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:10.385 19:55:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:14:10.385 19:55:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:10.385 19:55:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:10.385 19:55:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:10.385 19:55:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:10.385 19:55:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.385 19:55:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:10.385 19:55:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:10.385 19:55:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.385 19:55:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:10.385 "name": "raid_bdev1", 00:14:10.385 "uuid": "f1b3b75c-6351-4ab1-875a-9788de294e1d", 00:14:10.385 "strip_size_kb": 0, 00:14:10.385 "state": "online", 00:14:10.385 "raid_level": "raid1", 00:14:10.385 "superblock": true, 00:14:10.385 "num_base_bdevs": 2, 00:14:10.386 "num_base_bdevs_discovered": 1, 00:14:10.386 "num_base_bdevs_operational": 1, 00:14:10.386 "base_bdevs_list": [ 00:14:10.386 { 00:14:10.386 "name": null, 00:14:10.386 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:10.386 "is_configured": false, 00:14:10.386 "data_offset": 0, 00:14:10.386 "data_size": 7936 00:14:10.386 }, 00:14:10.386 { 00:14:10.386 "name": "BaseBdev2", 00:14:10.386 "uuid": "112bc69b-1ab9-5b56-a9b6-fed9b59c34ba", 00:14:10.386 "is_configured": true, 00:14:10.386 "data_offset": 256, 00:14:10.386 "data_size": 7936 00:14:10.386 } 00:14:10.386 ] 00:14:10.386 }' 00:14:10.386 19:55:01 bdev_raid.raid_rebuild_test_sb_md_separate 
-- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:10.386 19:55:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:10.644 19:55:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:10.644 19:55:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.644 19:55:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:10.644 [2024-11-26 19:55:01.545011] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:10.644 [2024-11-26 19:55:01.545071] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:10.644 [2024-11-26 19:55:01.545096] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:14:10.644 [2024-11-26 19:55:01.545106] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:10.644 [2024-11-26 19:55:01.545351] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:10.644 [2024-11-26 19:55:01.545364] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:10.644 [2024-11-26 19:55:01.545418] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:10.644 [2024-11-26 19:55:01.545430] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:14:10.644 [2024-11-26 19:55:01.545439] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:14:10.644 [2024-11-26 19:55:01.545463] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:10.644 [2024-11-26 19:55:01.552595] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:14:10.644 spare 00:14:10.644 19:55:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.644 19:55:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@764 -- # sleep 1 00:14:10.645 [2024-11-26 19:55:01.554259] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:12.018 19:55:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:12.018 19:55:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:12.018 19:55:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:12.018 19:55:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:12.018 19:55:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:12.018 19:55:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:12.018 19:55:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:12.018 19:55:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.018 19:55:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:12.018 19:55:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.018 19:55:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:12.018 "name": 
"raid_bdev1", 00:14:12.018 "uuid": "f1b3b75c-6351-4ab1-875a-9788de294e1d", 00:14:12.018 "strip_size_kb": 0, 00:14:12.018 "state": "online", 00:14:12.018 "raid_level": "raid1", 00:14:12.018 "superblock": true, 00:14:12.018 "num_base_bdevs": 2, 00:14:12.018 "num_base_bdevs_discovered": 2, 00:14:12.018 "num_base_bdevs_operational": 2, 00:14:12.018 "process": { 00:14:12.018 "type": "rebuild", 00:14:12.018 "target": "spare", 00:14:12.018 "progress": { 00:14:12.018 "blocks": 2560, 00:14:12.018 "percent": 32 00:14:12.018 } 00:14:12.018 }, 00:14:12.019 "base_bdevs_list": [ 00:14:12.019 { 00:14:12.019 "name": "spare", 00:14:12.019 "uuid": "c16767c3-3c16-5e1a-86ea-13da8fe77281", 00:14:12.019 "is_configured": true, 00:14:12.019 "data_offset": 256, 00:14:12.019 "data_size": 7936 00:14:12.019 }, 00:14:12.019 { 00:14:12.019 "name": "BaseBdev2", 00:14:12.019 "uuid": "112bc69b-1ab9-5b56-a9b6-fed9b59c34ba", 00:14:12.019 "is_configured": true, 00:14:12.019 "data_offset": 256, 00:14:12.019 "data_size": 7936 00:14:12.019 } 00:14:12.019 ] 00:14:12.019 }' 00:14:12.019 19:55:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:12.019 19:55:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:12.019 19:55:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:12.019 19:55:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:12.019 19:55:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:14:12.019 19:55:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.019 19:55:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:12.019 [2024-11-26 19:55:02.660933] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: 
*DEBUG*: spare 00:14:12.019 [2024-11-26 19:55:02.760844] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:12.019 [2024-11-26 19:55:02.760902] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:12.019 [2024-11-26 19:55:02.760918] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:12.019 [2024-11-26 19:55:02.760925] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:12.019 19:55:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.019 19:55:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:12.019 19:55:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:12.019 19:55:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:12.019 19:55:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:12.019 19:55:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:12.019 19:55:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:12.019 19:55:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:12.019 19:55:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:12.019 19:55:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:12.019 19:55:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:12.019 19:55:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:14:12.019 19:55:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.019 19:55:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:12.019 19:55:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:12.019 19:55:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.019 19:55:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:12.019 "name": "raid_bdev1", 00:14:12.019 "uuid": "f1b3b75c-6351-4ab1-875a-9788de294e1d", 00:14:12.019 "strip_size_kb": 0, 00:14:12.019 "state": "online", 00:14:12.019 "raid_level": "raid1", 00:14:12.019 "superblock": true, 00:14:12.019 "num_base_bdevs": 2, 00:14:12.019 "num_base_bdevs_discovered": 1, 00:14:12.019 "num_base_bdevs_operational": 1, 00:14:12.019 "base_bdevs_list": [ 00:14:12.019 { 00:14:12.019 "name": null, 00:14:12.019 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:12.019 "is_configured": false, 00:14:12.019 "data_offset": 0, 00:14:12.019 "data_size": 7936 00:14:12.019 }, 00:14:12.019 { 00:14:12.019 "name": "BaseBdev2", 00:14:12.019 "uuid": "112bc69b-1ab9-5b56-a9b6-fed9b59c34ba", 00:14:12.019 "is_configured": true, 00:14:12.019 "data_offset": 256, 00:14:12.019 "data_size": 7936 00:14:12.019 } 00:14:12.019 ] 00:14:12.019 }' 00:14:12.019 19:55:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:12.019 19:55:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:12.277 19:55:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:12.277 19:55:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:12.277 19:55:03 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:12.277 19:55:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:12.277 19:55:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:12.277 19:55:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:12.277 19:55:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.277 19:55:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:12.277 19:55:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:12.277 19:55:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.277 19:55:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:12.277 "name": "raid_bdev1", 00:14:12.277 "uuid": "f1b3b75c-6351-4ab1-875a-9788de294e1d", 00:14:12.277 "strip_size_kb": 0, 00:14:12.277 "state": "online", 00:14:12.277 "raid_level": "raid1", 00:14:12.277 "superblock": true, 00:14:12.277 "num_base_bdevs": 2, 00:14:12.277 "num_base_bdevs_discovered": 1, 00:14:12.277 "num_base_bdevs_operational": 1, 00:14:12.277 "base_bdevs_list": [ 00:14:12.277 { 00:14:12.277 "name": null, 00:14:12.277 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:12.277 "is_configured": false, 00:14:12.277 "data_offset": 0, 00:14:12.277 "data_size": 7936 00:14:12.277 }, 00:14:12.277 { 00:14:12.277 "name": "BaseBdev2", 00:14:12.277 "uuid": "112bc69b-1ab9-5b56-a9b6-fed9b59c34ba", 00:14:12.277 "is_configured": true, 00:14:12.277 "data_offset": 256, 00:14:12.277 "data_size": 7936 00:14:12.277 } 00:14:12.277 ] 00:14:12.277 }' 00:14:12.277 19:55:03 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:12.277 19:55:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:12.277 19:55:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:12.277 19:55:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:12.277 19:55:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:14:12.277 19:55:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.277 19:55:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:12.277 19:55:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.277 19:55:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:12.277 19:55:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.277 19:55:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:12.277 [2024-11-26 19:55:03.201488] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:12.277 [2024-11-26 19:55:03.201538] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:12.278 [2024-11-26 19:55:03.201557] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:14:12.278 [2024-11-26 19:55:03.201565] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:12.278 [2024-11-26 19:55:03.201769] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:12.278 [2024-11-26 19:55:03.201778] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: BaseBdev1 00:14:12.278 [2024-11-26 19:55:03.201823] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:14:12.278 [2024-11-26 19:55:03.201835] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:14:12.278 [2024-11-26 19:55:03.201843] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:12.278 [2024-11-26 19:55:03.201852] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:14:12.278 BaseBdev1 00:14:12.278 19:55:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.278 19:55:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@775 -- # sleep 1 00:14:13.652 19:55:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:13.652 19:55:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:13.652 19:55:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:13.652 19:55:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:13.652 19:55:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:13.652 19:55:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:13.652 19:55:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:13.652 19:55:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:13.652 19:55:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:14:13.652 19:55:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:13.652 19:55:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:13.652 19:55:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.652 19:55:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:13.652 19:55:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:13.652 19:55:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.652 19:55:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:13.652 "name": "raid_bdev1", 00:14:13.652 "uuid": "f1b3b75c-6351-4ab1-875a-9788de294e1d", 00:14:13.652 "strip_size_kb": 0, 00:14:13.652 "state": "online", 00:14:13.652 "raid_level": "raid1", 00:14:13.652 "superblock": true, 00:14:13.652 "num_base_bdevs": 2, 00:14:13.652 "num_base_bdevs_discovered": 1, 00:14:13.652 "num_base_bdevs_operational": 1, 00:14:13.652 "base_bdevs_list": [ 00:14:13.652 { 00:14:13.652 "name": null, 00:14:13.652 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:13.652 "is_configured": false, 00:14:13.652 "data_offset": 0, 00:14:13.652 "data_size": 7936 00:14:13.652 }, 00:14:13.652 { 00:14:13.652 "name": "BaseBdev2", 00:14:13.652 "uuid": "112bc69b-1ab9-5b56-a9b6-fed9b59c34ba", 00:14:13.652 "is_configured": true, 00:14:13.652 "data_offset": 256, 00:14:13.652 "data_size": 7936 00:14:13.652 } 00:14:13.652 ] 00:14:13.652 }' 00:14:13.652 19:55:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:13.652 19:55:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:13.652 19:55:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@777 
-- # verify_raid_bdev_process raid_bdev1 none none 00:14:13.652 19:55:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:13.652 19:55:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:13.652 19:55:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:13.652 19:55:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:13.652 19:55:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:13.652 19:55:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:13.652 19:55:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.652 19:55:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:13.652 19:55:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.652 19:55:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:13.652 "name": "raid_bdev1", 00:14:13.652 "uuid": "f1b3b75c-6351-4ab1-875a-9788de294e1d", 00:14:13.652 "strip_size_kb": 0, 00:14:13.652 "state": "online", 00:14:13.652 "raid_level": "raid1", 00:14:13.652 "superblock": true, 00:14:13.652 "num_base_bdevs": 2, 00:14:13.652 "num_base_bdevs_discovered": 1, 00:14:13.653 "num_base_bdevs_operational": 1, 00:14:13.653 "base_bdevs_list": [ 00:14:13.653 { 00:14:13.653 "name": null, 00:14:13.653 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:13.653 "is_configured": false, 00:14:13.653 "data_offset": 0, 00:14:13.653 "data_size": 7936 00:14:13.653 }, 00:14:13.653 { 00:14:13.653 "name": "BaseBdev2", 00:14:13.653 "uuid": "112bc69b-1ab9-5b56-a9b6-fed9b59c34ba", 00:14:13.653 "is_configured": 
true, 00:14:13.653 "data_offset": 256, 00:14:13.653 "data_size": 7936 00:14:13.653 } 00:14:13.653 ] 00:14:13.653 }' 00:14:13.653 19:55:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:13.653 19:55:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:13.653 19:55:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:13.911 19:55:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:13.911 19:55:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:13.911 19:55:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@652 -- # local es=0 00:14:13.911 19:55:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:13.911 19:55:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:14:13.911 19:55:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:13.911 19:55:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:14:13.911 19:55:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:13.911 19:55:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:13.911 19:55:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.911 19:55:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:13.911 [2024-11-26 19:55:04.617795] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:13.911 [2024-11-26 19:55:04.617956] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:14:13.911 [2024-11-26 19:55:04.617970] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:13.911 request: 00:14:13.911 { 00:14:13.911 "base_bdev": "BaseBdev1", 00:14:13.911 "raid_bdev": "raid_bdev1", 00:14:13.911 "method": "bdev_raid_add_base_bdev", 00:14:13.911 "req_id": 1 00:14:13.911 } 00:14:13.911 Got JSON-RPC error response 00:14:13.911 response: 00:14:13.911 { 00:14:13.911 "code": -22, 00:14:13.911 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:14:13.911 } 00:14:13.911 19:55:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:14:13.911 19:55:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@655 -- # es=1 00:14:13.911 19:55:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:13.911 19:55:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:13.911 19:55:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:13.912 19:55:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@779 -- # sleep 1 00:14:14.844 19:55:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:14.844 19:55:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:14.844 19:55:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:14.844 19:55:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:14:14.844 19:55:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:14.844 19:55:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:14.844 19:55:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:14.844 19:55:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:14.844 19:55:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:14.844 19:55:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:14.844 19:55:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:14.844 19:55:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:14.844 19:55:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.844 19:55:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:14.845 19:55:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.845 19:55:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:14.845 "name": "raid_bdev1", 00:14:14.845 "uuid": "f1b3b75c-6351-4ab1-875a-9788de294e1d", 00:14:14.845 "strip_size_kb": 0, 00:14:14.845 "state": "online", 00:14:14.845 "raid_level": "raid1", 00:14:14.845 "superblock": true, 00:14:14.845 "num_base_bdevs": 2, 00:14:14.845 "num_base_bdevs_discovered": 1, 00:14:14.845 "num_base_bdevs_operational": 1, 00:14:14.845 "base_bdevs_list": [ 00:14:14.845 { 00:14:14.845 "name": null, 00:14:14.845 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:14.845 "is_configured": false, 00:14:14.845 
"data_offset": 0, 00:14:14.845 "data_size": 7936 00:14:14.845 }, 00:14:14.845 { 00:14:14.845 "name": "BaseBdev2", 00:14:14.845 "uuid": "112bc69b-1ab9-5b56-a9b6-fed9b59c34ba", 00:14:14.845 "is_configured": true, 00:14:14.845 "data_offset": 256, 00:14:14.845 "data_size": 7936 00:14:14.845 } 00:14:14.845 ] 00:14:14.845 }' 00:14:14.845 19:55:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:14.845 19:55:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:15.103 19:55:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:15.103 19:55:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:15.103 19:55:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:15.103 19:55:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:15.103 19:55:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:15.103 19:55:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:15.103 19:55:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:15.103 19:55:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.103 19:55:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:15.103 19:55:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.103 19:55:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:15.103 "name": "raid_bdev1", 00:14:15.103 "uuid": "f1b3b75c-6351-4ab1-875a-9788de294e1d", 00:14:15.103 
"strip_size_kb": 0, 00:14:15.103 "state": "online", 00:14:15.103 "raid_level": "raid1", 00:14:15.103 "superblock": true, 00:14:15.103 "num_base_bdevs": 2, 00:14:15.103 "num_base_bdevs_discovered": 1, 00:14:15.103 "num_base_bdevs_operational": 1, 00:14:15.103 "base_bdevs_list": [ 00:14:15.103 { 00:14:15.103 "name": null, 00:14:15.103 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:15.103 "is_configured": false, 00:14:15.103 "data_offset": 0, 00:14:15.103 "data_size": 7936 00:14:15.103 }, 00:14:15.103 { 00:14:15.103 "name": "BaseBdev2", 00:14:15.103 "uuid": "112bc69b-1ab9-5b56-a9b6-fed9b59c34ba", 00:14:15.103 "is_configured": true, 00:14:15.103 "data_offset": 256, 00:14:15.103 "data_size": 7936 00:14:15.103 } 00:14:15.103 ] 00:14:15.103 }' 00:14:15.103 19:55:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:15.103 19:55:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:15.103 19:55:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:15.103 19:55:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:15.103 19:55:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@784 -- # killprocess 85181 00:14:15.103 19:55:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@954 -- # '[' -z 85181 ']' 00:14:15.103 19:55:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@958 -- # kill -0 85181 00:14:15.103 19:55:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@959 -- # uname 00:14:15.103 19:55:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:15.103 19:55:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85181 00:14:15.361 killing process with 
pid 85181 00:14:15.361 Received shutdown signal, test time was about 60.000000 seconds 00:14:15.361 00:14:15.361 Latency(us) 00:14:15.361 [2024-11-26T19:55:06.298Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:15.361 [2024-11-26T19:55:06.298Z] =================================================================================================================== 00:14:15.361 [2024-11-26T19:55:06.298Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:15.361 19:55:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:15.361 19:55:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:15.361 19:55:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85181' 00:14:15.361 19:55:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@973 -- # kill 85181 00:14:15.361 [2024-11-26 19:55:06.052441] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:15.361 19:55:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@978 -- # wait 85181 00:14:15.361 [2024-11-26 19:55:06.052549] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:15.361 [2024-11-26 19:55:06.052594] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:15.361 [2024-11-26 19:55:06.052603] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:14:15.361 [2024-11-26 19:55:06.217916] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:15.927 ************************************ 00:14:15.927 END TEST raid_rebuild_test_sb_md_separate 00:14:15.927 ************************************ 00:14:15.927 19:55:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@786 -- # 
return 0 00:14:15.927 00:14:15.927 real 0m17.047s 00:14:15.927 user 0m21.598s 00:14:15.927 sys 0m1.919s 00:14:15.927 19:55:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:15.927 19:55:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:15.927 19:55:06 bdev_raid -- bdev/bdev_raid.sh@1010 -- # base_malloc_params='-m 32 -i' 00:14:15.927 19:55:06 bdev_raid -- bdev/bdev_raid.sh@1011 -- # run_test raid_state_function_test_sb_md_interleaved raid_state_function_test raid1 2 true 00:14:15.927 19:55:06 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:14:15.927 19:55:06 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:15.927 19:55:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:15.927 ************************************ 00:14:15.927 START TEST raid_state_function_test_sb_md_interleaved 00:14:15.927 ************************************ 00:14:15.927 19:55:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:14:15.927 19:55:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:14:15.927 19:55:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:14:15.927 19:55:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:14:15.927 19:55:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:14:15.927 19:55:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:14:15.927 19:55:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:15.927 19:55:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 
-- # echo BaseBdev1 00:14:15.927 19:55:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:15.927 19:55:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:15.927 19:55:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:14:15.927 19:55:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:15.928 19:55:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:15.928 19:55:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:14:15.928 19:55:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:14:15.928 19:55:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:14:15.928 19:55:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # local strip_size 00:14:15.928 19:55:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:14:15.928 19:55:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:14:15.928 19:55:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:14:15.928 Process raid pid: 85849 00:14:15.928 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:14:15.928 19:55:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:14:15.928 19:55:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:14:15.928 19:55:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:14:15.928 19:55:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@229 -- # raid_pid=85849 00:14:15.928 19:55:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 85849' 00:14:15.928 19:55:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@231 -- # waitforlisten 85849 00:14:15.928 19:55:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 85849 ']' 00:14:15.928 19:55:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:15.928 19:55:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:15.928 19:55:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:15.928 19:55:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:15.928 19:55:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:14:15.928 19:55:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:16.186 [2024-11-26 19:55:06.919544] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 
00:14:16.186 [2024-11-26 19:55:06.919676] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:16.186 [2024-11-26 19:55:07.077136] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:16.444 [2024-11-26 19:55:07.170366] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:16.444 [2024-11-26 19:55:07.290640] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:16.444 [2024-11-26 19:55:07.290673] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:17.010 19:55:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:17.010 19:55:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:14:17.010 19:55:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:14:17.010 19:55:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.010 19:55:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:17.010 [2024-11-26 19:55:07.764847] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:17.010 [2024-11-26 19:55:07.764892] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:17.010 [2024-11-26 19:55:07.764905] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:17.010 [2024-11-26 19:55:07.764913] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:17.010 19:55:07 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.010 19:55:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:14:17.010 19:55:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:17.010 19:55:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:17.010 19:55:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:17.010 19:55:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:17.010 19:55:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:17.010 19:55:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:17.010 19:55:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:17.010 19:55:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:17.010 19:55:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:17.010 19:55:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:17.010 19:55:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:17.010 19:55:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.010 19:55:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:17.010 19:55:07 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.010 19:55:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:17.010 "name": "Existed_Raid", 00:14:17.010 "uuid": "c0f38312-e61e-4e94-bafa-0db35b3474e0", 00:14:17.010 "strip_size_kb": 0, 00:14:17.010 "state": "configuring", 00:14:17.010 "raid_level": "raid1", 00:14:17.010 "superblock": true, 00:14:17.010 "num_base_bdevs": 2, 00:14:17.010 "num_base_bdevs_discovered": 0, 00:14:17.010 "num_base_bdevs_operational": 2, 00:14:17.010 "base_bdevs_list": [ 00:14:17.010 { 00:14:17.010 "name": "BaseBdev1", 00:14:17.010 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:17.010 "is_configured": false, 00:14:17.010 "data_offset": 0, 00:14:17.010 "data_size": 0 00:14:17.010 }, 00:14:17.010 { 00:14:17.010 "name": "BaseBdev2", 00:14:17.010 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:17.010 "is_configured": false, 00:14:17.010 "data_offset": 0, 00:14:17.010 "data_size": 0 00:14:17.010 } 00:14:17.010 ] 00:14:17.010 }' 00:14:17.010 19:55:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:17.010 19:55:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:17.269 19:55:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:17.269 19:55:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.269 19:55:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:17.269 [2024-11-26 19:55:08.084864] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:17.269 [2024-11-26 19:55:08.084895] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state 
configuring 00:14:17.269 19:55:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.269 19:55:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:14:17.269 19:55:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.269 19:55:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:17.269 [2024-11-26 19:55:08.092854] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:17.269 [2024-11-26 19:55:08.092891] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:17.269 [2024-11-26 19:55:08.092899] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:17.269 [2024-11-26 19:55:08.092909] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:17.269 19:55:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.269 19:55:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1 00:14:17.269 19:55:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.269 19:55:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:17.269 [2024-11-26 19:55:08.122773] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:17.269 BaseBdev1 00:14:17.269 19:55:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.269 19:55:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:14:17.269 19:55:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:14:17.269 19:55:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:17.269 19:55:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # local i 00:14:17.269 19:55:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:17.269 19:55:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:17.269 19:55:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:17.269 19:55:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.269 19:55:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:17.269 19:55:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.269 19:55:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:17.269 19:55:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.269 19:55:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:17.269 [ 00:14:17.269 { 00:14:17.269 "name": "BaseBdev1", 00:14:17.269 "aliases": [ 00:14:17.269 "a6e2e196-f654-4542-b0c2-b2acbfa3d136" 00:14:17.269 ], 00:14:17.269 "product_name": "Malloc disk", 00:14:17.269 "block_size": 4128, 00:14:17.269 "num_blocks": 8192, 00:14:17.269 "uuid": "a6e2e196-f654-4542-b0c2-b2acbfa3d136", 00:14:17.269 "md_size": 32, 00:14:17.269 
"md_interleave": true, 00:14:17.269 "dif_type": 0, 00:14:17.269 "assigned_rate_limits": { 00:14:17.269 "rw_ios_per_sec": 0, 00:14:17.269 "rw_mbytes_per_sec": 0, 00:14:17.269 "r_mbytes_per_sec": 0, 00:14:17.269 "w_mbytes_per_sec": 0 00:14:17.269 }, 00:14:17.269 "claimed": true, 00:14:17.269 "claim_type": "exclusive_write", 00:14:17.269 "zoned": false, 00:14:17.269 "supported_io_types": { 00:14:17.269 "read": true, 00:14:17.269 "write": true, 00:14:17.269 "unmap": true, 00:14:17.269 "flush": true, 00:14:17.269 "reset": true, 00:14:17.269 "nvme_admin": false, 00:14:17.269 "nvme_io": false, 00:14:17.269 "nvme_io_md": false, 00:14:17.269 "write_zeroes": true, 00:14:17.269 "zcopy": true, 00:14:17.269 "get_zone_info": false, 00:14:17.269 "zone_management": false, 00:14:17.269 "zone_append": false, 00:14:17.269 "compare": false, 00:14:17.269 "compare_and_write": false, 00:14:17.270 "abort": true, 00:14:17.270 "seek_hole": false, 00:14:17.270 "seek_data": false, 00:14:17.270 "copy": true, 00:14:17.270 "nvme_iov_md": false 00:14:17.270 }, 00:14:17.270 "memory_domains": [ 00:14:17.270 { 00:14:17.270 "dma_device_id": "system", 00:14:17.270 "dma_device_type": 1 00:14:17.270 }, 00:14:17.270 { 00:14:17.270 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:17.270 "dma_device_type": 2 00:14:17.270 } 00:14:17.270 ], 00:14:17.270 "driver_specific": {} 00:14:17.270 } 00:14:17.270 ] 00:14:17.270 19:55:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.270 19:55:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@911 -- # return 0 00:14:17.270 19:55:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:14:17.270 19:55:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:17.270 19:55:08 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:17.270 19:55:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:17.270 19:55:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:17.270 19:55:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:17.270 19:55:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:17.270 19:55:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:17.270 19:55:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:17.270 19:55:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:17.270 19:55:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:17.270 19:55:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:17.270 19:55:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.270 19:55:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:17.270 19:55:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.270 19:55:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:17.270 "name": "Existed_Raid", 00:14:17.270 "uuid": "9015e698-9921-44fe-97da-1d67c5304b64", 00:14:17.270 "strip_size_kb": 0, 00:14:17.270 "state": "configuring", 00:14:17.270 "raid_level": "raid1", 
00:14:17.270 "superblock": true, 00:14:17.270 "num_base_bdevs": 2, 00:14:17.270 "num_base_bdevs_discovered": 1, 00:14:17.270 "num_base_bdevs_operational": 2, 00:14:17.270 "base_bdevs_list": [ 00:14:17.270 { 00:14:17.270 "name": "BaseBdev1", 00:14:17.270 "uuid": "a6e2e196-f654-4542-b0c2-b2acbfa3d136", 00:14:17.270 "is_configured": true, 00:14:17.270 "data_offset": 256, 00:14:17.270 "data_size": 7936 00:14:17.270 }, 00:14:17.270 { 00:14:17.270 "name": "BaseBdev2", 00:14:17.270 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:17.270 "is_configured": false, 00:14:17.270 "data_offset": 0, 00:14:17.270 "data_size": 0 00:14:17.270 } 00:14:17.270 ] 00:14:17.270 }' 00:14:17.270 19:55:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:17.270 19:55:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:17.836 19:55:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:17.836 19:55:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.836 19:55:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:17.837 [2024-11-26 19:55:08.474900] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:17.837 [2024-11-26 19:55:08.474948] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:14:17.837 19:55:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.837 19:55:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:14:17.837 19:55:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 
-- # xtrace_disable 00:14:17.837 19:55:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:17.837 [2024-11-26 19:55:08.482937] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:17.837 [2024-11-26 19:55:08.484695] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:17.837 [2024-11-26 19:55:08.484733] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:17.837 19:55:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.837 19:55:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:14:17.837 19:55:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:17.837 19:55:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:14:17.837 19:55:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:17.837 19:55:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:17.837 19:55:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:17.837 19:55:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:17.837 19:55:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:17.837 19:55:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:17.837 19:55:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:17.837 
19:55:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:17.837 19:55:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:17.837 19:55:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:17.837 19:55:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.837 19:55:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:17.837 19:55:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:17.837 19:55:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.837 19:55:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:17.837 "name": "Existed_Raid", 00:14:17.837 "uuid": "f91011f8-e34c-4680-afc1-7cec7dc62fc4", 00:14:17.837 "strip_size_kb": 0, 00:14:17.837 "state": "configuring", 00:14:17.837 "raid_level": "raid1", 00:14:17.837 "superblock": true, 00:14:17.837 "num_base_bdevs": 2, 00:14:17.837 "num_base_bdevs_discovered": 1, 00:14:17.837 "num_base_bdevs_operational": 2, 00:14:17.837 "base_bdevs_list": [ 00:14:17.837 { 00:14:17.837 "name": "BaseBdev1", 00:14:17.837 "uuid": "a6e2e196-f654-4542-b0c2-b2acbfa3d136", 00:14:17.837 "is_configured": true, 00:14:17.837 "data_offset": 256, 00:14:17.837 "data_size": 7936 00:14:17.837 }, 00:14:17.837 { 00:14:17.837 "name": "BaseBdev2", 00:14:17.837 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:17.837 "is_configured": false, 00:14:17.837 "data_offset": 0, 00:14:17.837 "data_size": 0 00:14:17.837 } 00:14:17.837 ] 00:14:17.837 }' 00:14:17.837 19:55:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- 
# xtrace_disable 00:14:17.837 19:55:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:18.095 19:55:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2 00:14:18.095 19:55:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.095 19:55:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:18.095 [2024-11-26 19:55:08.811393] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:18.095 [2024-11-26 19:55:08.811561] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:18.095 [2024-11-26 19:55:08.811571] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:14:18.095 [2024-11-26 19:55:08.811642] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:14:18.095 [2024-11-26 19:55:08.811702] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:18.095 [2024-11-26 19:55:08.811711] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:14:18.095 [2024-11-26 19:55:08.811763] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:18.095 BaseBdev2 00:14:18.095 19:55:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.095 19:55:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:14:18.095 19:55:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:18.095 19:55:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # local bdev_timeout= 
00:14:18.095 19:55:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # local i 00:14:18.095 19:55:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:18.095 19:55:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:18.095 19:55:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:18.095 19:55:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.095 19:55:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:18.095 19:55:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.095 19:55:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:18.095 19:55:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.095 19:55:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:18.095 [ 00:14:18.095 { 00:14:18.095 "name": "BaseBdev2", 00:14:18.095 "aliases": [ 00:14:18.095 "1c6476ed-64d7-480a-b0e6-9e4cb1ba88d8" 00:14:18.095 ], 00:14:18.095 "product_name": "Malloc disk", 00:14:18.095 "block_size": 4128, 00:14:18.095 "num_blocks": 8192, 00:14:18.095 "uuid": "1c6476ed-64d7-480a-b0e6-9e4cb1ba88d8", 00:14:18.096 "md_size": 32, 00:14:18.096 "md_interleave": true, 00:14:18.096 "dif_type": 0, 00:14:18.096 "assigned_rate_limits": { 00:14:18.096 "rw_ios_per_sec": 0, 00:14:18.096 "rw_mbytes_per_sec": 0, 00:14:18.096 "r_mbytes_per_sec": 0, 00:14:18.096 "w_mbytes_per_sec": 0 00:14:18.096 }, 00:14:18.096 "claimed": true, 00:14:18.096 "claim_type": "exclusive_write", 
00:14:18.096 "zoned": false, 00:14:18.096 "supported_io_types": { 00:14:18.096 "read": true, 00:14:18.096 "write": true, 00:14:18.096 "unmap": true, 00:14:18.096 "flush": true, 00:14:18.096 "reset": true, 00:14:18.096 "nvme_admin": false, 00:14:18.096 "nvme_io": false, 00:14:18.096 "nvme_io_md": false, 00:14:18.096 "write_zeroes": true, 00:14:18.096 "zcopy": true, 00:14:18.096 "get_zone_info": false, 00:14:18.096 "zone_management": false, 00:14:18.096 "zone_append": false, 00:14:18.096 "compare": false, 00:14:18.096 "compare_and_write": false, 00:14:18.096 "abort": true, 00:14:18.096 "seek_hole": false, 00:14:18.096 "seek_data": false, 00:14:18.096 "copy": true, 00:14:18.096 "nvme_iov_md": false 00:14:18.096 }, 00:14:18.096 "memory_domains": [ 00:14:18.096 { 00:14:18.096 "dma_device_id": "system", 00:14:18.096 "dma_device_type": 1 00:14:18.096 }, 00:14:18.096 { 00:14:18.096 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:18.096 "dma_device_type": 2 00:14:18.096 } 00:14:18.096 ], 00:14:18.096 "driver_specific": {} 00:14:18.096 } 00:14:18.096 ] 00:14:18.096 19:55:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.096 19:55:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@911 -- # return 0 00:14:18.096 19:55:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:18.096 19:55:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:18.096 19:55:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:14:18.096 19:55:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:18.096 19:55:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:18.096 
19:55:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:18.096 19:55:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:18.096 19:55:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:18.096 19:55:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:18.096 19:55:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:18.096 19:55:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:18.096 19:55:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:18.096 19:55:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:18.096 19:55:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:18.096 19:55:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.096 19:55:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:18.096 19:55:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.096 19:55:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:18.096 "name": "Existed_Raid", 00:14:18.096 "uuid": "f91011f8-e34c-4680-afc1-7cec7dc62fc4", 00:14:18.096 "strip_size_kb": 0, 00:14:18.096 "state": "online", 00:14:18.096 "raid_level": "raid1", 00:14:18.096 "superblock": true, 00:14:18.096 "num_base_bdevs": 2, 00:14:18.096 "num_base_bdevs_discovered": 2, 00:14:18.096 
"num_base_bdevs_operational": 2, 00:14:18.096 "base_bdevs_list": [ 00:14:18.096 { 00:14:18.096 "name": "BaseBdev1", 00:14:18.096 "uuid": "a6e2e196-f654-4542-b0c2-b2acbfa3d136", 00:14:18.096 "is_configured": true, 00:14:18.096 "data_offset": 256, 00:14:18.096 "data_size": 7936 00:14:18.096 }, 00:14:18.096 { 00:14:18.096 "name": "BaseBdev2", 00:14:18.096 "uuid": "1c6476ed-64d7-480a-b0e6-9e4cb1ba88d8", 00:14:18.096 "is_configured": true, 00:14:18.096 "data_offset": 256, 00:14:18.096 "data_size": 7936 00:14:18.096 } 00:14:18.096 ] 00:14:18.096 }' 00:14:18.096 19:55:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:18.096 19:55:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:18.353 19:55:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:14:18.353 19:55:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:18.353 19:55:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:18.353 19:55:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:18.353 19:55:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:14:18.353 19:55:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:18.353 19:55:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:18.353 19:55:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:18.353 19:55:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.353 19:55:09 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:18.353 [2024-11-26 19:55:09.151770] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:18.353 19:55:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.353 19:55:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:18.353 "name": "Existed_Raid", 00:14:18.353 "aliases": [ 00:14:18.353 "f91011f8-e34c-4680-afc1-7cec7dc62fc4" 00:14:18.353 ], 00:14:18.353 "product_name": "Raid Volume", 00:14:18.353 "block_size": 4128, 00:14:18.353 "num_blocks": 7936, 00:14:18.353 "uuid": "f91011f8-e34c-4680-afc1-7cec7dc62fc4", 00:14:18.353 "md_size": 32, 00:14:18.353 "md_interleave": true, 00:14:18.353 "dif_type": 0, 00:14:18.353 "assigned_rate_limits": { 00:14:18.353 "rw_ios_per_sec": 0, 00:14:18.353 "rw_mbytes_per_sec": 0, 00:14:18.353 "r_mbytes_per_sec": 0, 00:14:18.353 "w_mbytes_per_sec": 0 00:14:18.353 }, 00:14:18.353 "claimed": false, 00:14:18.353 "zoned": false, 00:14:18.353 "supported_io_types": { 00:14:18.353 "read": true, 00:14:18.353 "write": true, 00:14:18.353 "unmap": false, 00:14:18.353 "flush": false, 00:14:18.353 "reset": true, 00:14:18.353 "nvme_admin": false, 00:14:18.353 "nvme_io": false, 00:14:18.353 "nvme_io_md": false, 00:14:18.353 "write_zeroes": true, 00:14:18.353 "zcopy": false, 00:14:18.353 "get_zone_info": false, 00:14:18.353 "zone_management": false, 00:14:18.353 "zone_append": false, 00:14:18.354 "compare": false, 00:14:18.354 "compare_and_write": false, 00:14:18.354 "abort": false, 00:14:18.354 "seek_hole": false, 00:14:18.354 "seek_data": false, 00:14:18.354 "copy": false, 00:14:18.354 "nvme_iov_md": false 00:14:18.354 }, 00:14:18.354 "memory_domains": [ 00:14:18.354 { 00:14:18.354 "dma_device_id": "system", 00:14:18.354 "dma_device_type": 1 00:14:18.354 }, 00:14:18.354 { 00:14:18.354 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:14:18.354 "dma_device_type": 2 00:14:18.354 }, 00:14:18.354 { 00:14:18.354 "dma_device_id": "system", 00:14:18.354 "dma_device_type": 1 00:14:18.354 }, 00:14:18.354 { 00:14:18.354 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:18.354 "dma_device_type": 2 00:14:18.354 } 00:14:18.354 ], 00:14:18.354 "driver_specific": { 00:14:18.354 "raid": { 00:14:18.354 "uuid": "f91011f8-e34c-4680-afc1-7cec7dc62fc4", 00:14:18.354 "strip_size_kb": 0, 00:14:18.354 "state": "online", 00:14:18.354 "raid_level": "raid1", 00:14:18.354 "superblock": true, 00:14:18.354 "num_base_bdevs": 2, 00:14:18.354 "num_base_bdevs_discovered": 2, 00:14:18.354 "num_base_bdevs_operational": 2, 00:14:18.354 "base_bdevs_list": [ 00:14:18.354 { 00:14:18.354 "name": "BaseBdev1", 00:14:18.354 "uuid": "a6e2e196-f654-4542-b0c2-b2acbfa3d136", 00:14:18.354 "is_configured": true, 00:14:18.354 "data_offset": 256, 00:14:18.354 "data_size": 7936 00:14:18.354 }, 00:14:18.354 { 00:14:18.354 "name": "BaseBdev2", 00:14:18.354 "uuid": "1c6476ed-64d7-480a-b0e6-9e4cb1ba88d8", 00:14:18.354 "is_configured": true, 00:14:18.354 "data_offset": 256, 00:14:18.354 "data_size": 7936 00:14:18.354 } 00:14:18.354 ] 00:14:18.354 } 00:14:18.354 } 00:14:18.354 }' 00:14:18.354 19:55:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:18.354 19:55:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:14:18.354 BaseBdev2' 00:14:18.354 19:55:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:18.354 19:55:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:14:18.354 19:55:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- 
# for name in $base_bdev_names 00:14:18.354 19:55:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:14:18.354 19:55:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.354 19:55:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:18.354 19:55:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:18.354 19:55:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.354 19:55:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:14:18.354 19:55:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:14:18.354 19:55:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:18.612 19:55:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:18.612 19:55:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:18.612 19:55:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.612 19:55:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:18.612 19:55:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.612 19:55:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:14:18.612 
19:55:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:14:18.612 19:55:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:18.612 19:55:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.612 19:55:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:18.612 [2024-11-26 19:55:09.327554] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:18.612 19:55:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.612 19:55:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@260 -- # local expected_state 00:14:18.612 19:55:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:14:18.612 19:55:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:18.612 19:55:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:14:18.612 19:55:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:14:18.612 19:55:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:14:18.612 19:55:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:18.612 19:55:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:18.612 19:55:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:18.612 19:55:09 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:18.612 19:55:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:18.612 19:55:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:18.612 19:55:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:18.612 19:55:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:18.612 19:55:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:18.612 19:55:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:18.612 19:55:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.612 19:55:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:18.612 19:55:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:18.612 19:55:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.612 19:55:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:18.612 "name": "Existed_Raid", 00:14:18.612 "uuid": "f91011f8-e34c-4680-afc1-7cec7dc62fc4", 00:14:18.612 "strip_size_kb": 0, 00:14:18.612 "state": "online", 00:14:18.612 "raid_level": "raid1", 00:14:18.612 "superblock": true, 00:14:18.612 "num_base_bdevs": 2, 00:14:18.612 "num_base_bdevs_discovered": 1, 00:14:18.613 "num_base_bdevs_operational": 1, 00:14:18.613 "base_bdevs_list": [ 00:14:18.613 { 00:14:18.613 "name": null, 00:14:18.613 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:14:18.613 "is_configured": false, 00:14:18.613 "data_offset": 0, 00:14:18.613 "data_size": 7936 00:14:18.613 }, 00:14:18.613 { 00:14:18.613 "name": "BaseBdev2", 00:14:18.613 "uuid": "1c6476ed-64d7-480a-b0e6-9e4cb1ba88d8", 00:14:18.613 "is_configured": true, 00:14:18.613 "data_offset": 256, 00:14:18.613 "data_size": 7936 00:14:18.613 } 00:14:18.613 ] 00:14:18.613 }' 00:14:18.613 19:55:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:18.613 19:55:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:18.871 19:55:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:14:18.871 19:55:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:18.871 19:55:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:18.871 19:55:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.871 19:55:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:18.871 19:55:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:18.871 19:55:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.871 19:55:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:18.871 19:55:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:18.871 19:55:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:14:18.871 19:55:09 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.871 19:55:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:18.871 [2024-11-26 19:55:09.717737] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:18.871 [2024-11-26 19:55:09.717838] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:18.871 [2024-11-26 19:55:09.767539] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:18.871 [2024-11-26 19:55:09.767682] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:18.871 [2024-11-26 19:55:09.767748] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:14:18.871 19:55:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.871 19:55:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:18.871 19:55:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:18.871 19:55:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:18.871 19:55:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:14:18.871 19:55:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.871 19:55:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:18.871 19:55:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.871 19:55:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@278 -- # raid_bdev= 00:14:19.129 19:55:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:14:19.129 19:55:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:14:19.129 19:55:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@326 -- # killprocess 85849 00:14:19.129 19:55:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 85849 ']' 00:14:19.129 19:55:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 85849 00:14:19.129 19:55:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:14:19.129 19:55:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:19.129 19:55:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85849 00:14:19.129 killing process with pid 85849 00:14:19.129 19:55:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:19.129 19:55:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:19.129 19:55:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85849' 00:14:19.129 19:55:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@973 -- # kill 85849 00:14:19.129 [2024-11-26 19:55:09.832549] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:19.129 19:55:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@978 -- # wait 85849 00:14:19.129 [2024-11-26 19:55:09.841379] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:19.696 
19:55:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@328 -- # return 0 00:14:19.696 ************************************ 00:14:19.696 END TEST raid_state_function_test_sb_md_interleaved 00:14:19.696 ************************************ 00:14:19.696 00:14:19.696 real 0m3.609s 00:14:19.696 user 0m5.210s 00:14:19.696 sys 0m0.625s 00:14:19.696 19:55:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:19.696 19:55:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:19.697 19:55:10 bdev_raid -- bdev/bdev_raid.sh@1012 -- # run_test raid_superblock_test_md_interleaved raid_superblock_test raid1 2 00:14:19.697 19:55:10 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:14:19.697 19:55:10 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:19.697 19:55:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:19.697 ************************************ 00:14:19.697 START TEST raid_superblock_test_md_interleaved 00:14:19.697 ************************************ 00:14:19.697 19:55:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:14:19.697 19:55:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:14:19.697 19:55:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:14:19.697 19:55:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:14:19.697 19:55:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:14:19.697 19:55:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:14:19.697 19:55:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # local 
base_bdevs_pt 00:14:19.697 19:55:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:14:19.697 19:55:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:14:19.697 19:55:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:14:19.697 19:55:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@399 -- # local strip_size 00:14:19.697 19:55:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:14:19.697 19:55:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:14:19.697 19:55:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:14:19.697 19:55:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:14:19.697 19:55:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:14:19.697 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:14:19.697 19:55:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # raid_pid=86079 00:14:19.697 19:55:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@413 -- # waitforlisten 86079 00:14:19.697 19:55:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 86079 ']' 00:14:19.697 19:55:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:19.697 19:55:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:14:19.697 19:55:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:19.697 19:55:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:19.697 19:55:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:19.697 19:55:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:19.697 [2024-11-26 19:55:10.568888] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 
00:14:19.697 [2024-11-26 19:55:10.569189] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86079 ] 00:14:19.955 [2024-11-26 19:55:10.726010] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:19.955 [2024-11-26 19:55:10.823481] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:20.213 [2024-11-26 19:55:10.943772] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:20.213 [2024-11-26 19:55:10.943819] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:20.528 19:55:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:20.528 19:55:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:14:20.528 19:55:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:14:20.528 19:55:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:20.528 19:55:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:14:20.528 19:55:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:14:20.528 19:55:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:14:20.528 19:55:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:20.528 19:55:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:20.528 19:55:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # 
base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:20.528 19:55:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc1 00:14:20.528 19:55:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.528 19:55:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:20.528 malloc1 00:14:20.528 19:55:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.528 19:55:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:20.528 19:55:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.528 19:55:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:20.528 [2024-11-26 19:55:11.440779] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:20.528 [2024-11-26 19:55:11.440835] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:20.528 [2024-11-26 19:55:11.440855] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:20.528 [2024-11-26 19:55:11.440863] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:20.528 [2024-11-26 19:55:11.442532] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:20.528 [2024-11-26 19:55:11.442561] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:20.528 pt1 00:14:20.528 19:55:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.528 19:55:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:20.528 19:55:11 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:20.528 19:55:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:14:20.528 19:55:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:14:20.528 19:55:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:14:20.528 19:55:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:20.528 19:55:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:20.528 19:55:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:20.528 19:55:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc2 00:14:20.528 19:55:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.528 19:55:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:20.788 malloc2 00:14:20.788 19:55:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.788 19:55:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:20.788 19:55:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.788 19:55:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:20.788 [2024-11-26 19:55:11.477872] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:20.788 [2024-11-26 19:55:11.477917] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:20.788 [2024-11-26 19:55:11.477934] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:20.788 [2024-11-26 19:55:11.477942] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:20.788 [2024-11-26 19:55:11.479603] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:20.788 [2024-11-26 19:55:11.479630] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:20.788 pt2 00:14:20.788 19:55:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.788 19:55:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:20.788 19:55:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:20.788 19:55:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:14:20.788 19:55:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.788 19:55:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:20.788 [2024-11-26 19:55:11.485901] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:20.788 [2024-11-26 19:55:11.487531] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:20.788 [2024-11-26 19:55:11.487689] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:20.788 [2024-11-26 19:55:11.487699] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:14:20.789 [2024-11-26 19:55:11.487761] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:14:20.789 [2024-11-26 19:55:11.487819] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:20.789 [2024-11-26 19:55:11.487827] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:20.789 [2024-11-26 19:55:11.487880] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:20.789 19:55:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.789 19:55:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:20.789 19:55:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:20.789 19:55:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:20.789 19:55:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:20.789 19:55:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:20.789 19:55:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:20.789 19:55:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:20.789 19:55:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:20.789 19:55:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:20.789 19:55:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:20.789 19:55:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:20.789 19:55:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:14:20.789 19:55:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.789 19:55:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:20.789 19:55:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.789 19:55:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:20.789 "name": "raid_bdev1", 00:14:20.789 "uuid": "f0815408-b442-48af-ba6c-5cbe33b38384", 00:14:20.789 "strip_size_kb": 0, 00:14:20.789 "state": "online", 00:14:20.789 "raid_level": "raid1", 00:14:20.789 "superblock": true, 00:14:20.789 "num_base_bdevs": 2, 00:14:20.789 "num_base_bdevs_discovered": 2, 00:14:20.789 "num_base_bdevs_operational": 2, 00:14:20.789 "base_bdevs_list": [ 00:14:20.789 { 00:14:20.789 "name": "pt1", 00:14:20.789 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:20.789 "is_configured": true, 00:14:20.789 "data_offset": 256, 00:14:20.789 "data_size": 7936 00:14:20.789 }, 00:14:20.789 { 00:14:20.789 "name": "pt2", 00:14:20.789 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:20.789 "is_configured": true, 00:14:20.789 "data_offset": 256, 00:14:20.789 "data_size": 7936 00:14:20.789 } 00:14:20.789 ] 00:14:20.789 }' 00:14:20.789 19:55:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:20.789 19:55:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:21.052 19:55:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:14:21.052 19:55:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:14:21.052 19:55:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:21.052 19:55:11 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:21.052 19:55:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:14:21.052 19:55:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:21.052 19:55:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:21.052 19:55:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.052 19:55:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:21.052 19:55:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:21.052 [2024-11-26 19:55:11.842232] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:21.052 19:55:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.052 19:55:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:21.052 "name": "raid_bdev1", 00:14:21.052 "aliases": [ 00:14:21.052 "f0815408-b442-48af-ba6c-5cbe33b38384" 00:14:21.052 ], 00:14:21.052 "product_name": "Raid Volume", 00:14:21.052 "block_size": 4128, 00:14:21.052 "num_blocks": 7936, 00:14:21.052 "uuid": "f0815408-b442-48af-ba6c-5cbe33b38384", 00:14:21.052 "md_size": 32, 00:14:21.052 "md_interleave": true, 00:14:21.052 "dif_type": 0, 00:14:21.052 "assigned_rate_limits": { 00:14:21.052 "rw_ios_per_sec": 0, 00:14:21.052 "rw_mbytes_per_sec": 0, 00:14:21.052 "r_mbytes_per_sec": 0, 00:14:21.052 "w_mbytes_per_sec": 0 00:14:21.052 }, 00:14:21.052 "claimed": false, 00:14:21.052 "zoned": false, 00:14:21.052 "supported_io_types": { 00:14:21.052 "read": true, 00:14:21.052 "write": true, 00:14:21.052 "unmap": false, 00:14:21.052 "flush": false, 00:14:21.052 "reset": true, 
00:14:21.052 "nvme_admin": false, 00:14:21.052 "nvme_io": false, 00:14:21.052 "nvme_io_md": false, 00:14:21.052 "write_zeroes": true, 00:14:21.052 "zcopy": false, 00:14:21.052 "get_zone_info": false, 00:14:21.052 "zone_management": false, 00:14:21.052 "zone_append": false, 00:14:21.052 "compare": false, 00:14:21.052 "compare_and_write": false, 00:14:21.052 "abort": false, 00:14:21.052 "seek_hole": false, 00:14:21.052 "seek_data": false, 00:14:21.052 "copy": false, 00:14:21.052 "nvme_iov_md": false 00:14:21.052 }, 00:14:21.052 "memory_domains": [ 00:14:21.052 { 00:14:21.052 "dma_device_id": "system", 00:14:21.052 "dma_device_type": 1 00:14:21.052 }, 00:14:21.052 { 00:14:21.052 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:21.052 "dma_device_type": 2 00:14:21.052 }, 00:14:21.052 { 00:14:21.052 "dma_device_id": "system", 00:14:21.052 "dma_device_type": 1 00:14:21.052 }, 00:14:21.052 { 00:14:21.052 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:21.052 "dma_device_type": 2 00:14:21.052 } 00:14:21.052 ], 00:14:21.052 "driver_specific": { 00:14:21.052 "raid": { 00:14:21.052 "uuid": "f0815408-b442-48af-ba6c-5cbe33b38384", 00:14:21.052 "strip_size_kb": 0, 00:14:21.052 "state": "online", 00:14:21.052 "raid_level": "raid1", 00:14:21.052 "superblock": true, 00:14:21.052 "num_base_bdevs": 2, 00:14:21.052 "num_base_bdevs_discovered": 2, 00:14:21.052 "num_base_bdevs_operational": 2, 00:14:21.052 "base_bdevs_list": [ 00:14:21.052 { 00:14:21.052 "name": "pt1", 00:14:21.052 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:21.052 "is_configured": true, 00:14:21.052 "data_offset": 256, 00:14:21.052 "data_size": 7936 00:14:21.052 }, 00:14:21.052 { 00:14:21.052 "name": "pt2", 00:14:21.052 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:21.052 "is_configured": true, 00:14:21.052 "data_offset": 256, 00:14:21.052 "data_size": 7936 00:14:21.052 } 00:14:21.052 ] 00:14:21.052 } 00:14:21.052 } 00:14:21.052 }' 00:14:21.052 19:55:11 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:21.052 19:55:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:14:21.052 pt2' 00:14:21.052 19:55:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:21.052 19:55:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:14:21.052 19:55:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:21.052 19:55:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:14:21.052 19:55:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:21.052 19:55:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.052 19:55:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:21.052 19:55:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.052 19:55:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:14:21.052 19:55:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:14:21.052 19:55:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:21.052 19:55:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:21.052 19:55:11 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:14:21.052 19:55:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.052 19:55:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:21.312 19:55:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.312 19:55:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:14:21.312 19:55:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:14:21.312 19:55:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:14:21.312 19:55:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:21.312 19:55:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.312 19:55:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:21.312 [2024-11-26 19:55:12.022203] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:21.312 19:55:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.312 19:55:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=f0815408-b442-48af-ba6c-5cbe33b38384 00:14:21.312 19:55:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@436 -- # '[' -z f0815408-b442-48af-ba6c-5cbe33b38384 ']' 00:14:21.312 19:55:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:21.312 19:55:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.312 19:55:12 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:21.312 [2024-11-26 19:55:12.049951] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:21.312 [2024-11-26 19:55:12.049970] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:21.312 [2024-11-26 19:55:12.050044] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:21.312 [2024-11-26 19:55:12.050104] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:21.312 [2024-11-26 19:55:12.050115] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:21.312 19:55:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.312 19:55:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:21.312 19:55:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.312 19:55:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:14:21.312 19:55:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:21.312 19:55:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.312 19:55:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:14:21.312 19:55:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:14:21.312 19:55:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:21.312 19:55:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:14:21.312 19:55:12 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.312 19:55:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:21.312 19:55:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.312 19:55:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:21.312 19:55:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:14:21.312 19:55:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.312 19:55:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:21.312 19:55:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.312 19:55:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:14:21.312 19:55:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.312 19:55:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:14:21.312 19:55:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:21.312 19:55:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.312 19:55:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:14:21.312 19:55:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:14:21.312 19:55:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@652 -- # local es=0 00:14:21.312 
19:55:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:14:21.312 19:55:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:14:21.312 19:55:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:21.312 19:55:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:14:21.312 19:55:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:21.312 19:55:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:14:21.312 19:55:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.312 19:55:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:21.312 [2024-11-26 19:55:12.145991] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:14:21.312 [2024-11-26 19:55:12.147643] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:14:21.312 [2024-11-26 19:55:12.147699] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:14:21.312 [2024-11-26 19:55:12.147745] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:14:21.313 [2024-11-26 19:55:12.147757] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:21.313 [2024-11-26 19:55:12.147766] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:14:21.313 request: 
00:14:21.313 { 00:14:21.313 "name": "raid_bdev1", 00:14:21.313 "raid_level": "raid1", 00:14:21.313 "base_bdevs": [ 00:14:21.313 "malloc1", 00:14:21.313 "malloc2" 00:14:21.313 ], 00:14:21.313 "superblock": false, 00:14:21.313 "method": "bdev_raid_create", 00:14:21.313 "req_id": 1 00:14:21.313 } 00:14:21.313 Got JSON-RPC error response 00:14:21.313 response: 00:14:21.313 { 00:14:21.313 "code": -17, 00:14:21.313 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:14:21.313 } 00:14:21.313 19:55:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:14:21.313 19:55:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@655 -- # es=1 00:14:21.313 19:55:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:21.313 19:55:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:21.313 19:55:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:21.313 19:55:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:21.313 19:55:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.313 19:55:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:14:21.313 19:55:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:21.313 19:55:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.313 19:55:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:14:21.313 19:55:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:14:21.313 19:55:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@465 -- # 
rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:21.313 19:55:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.313 19:55:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:21.313 [2024-11-26 19:55:12.193979] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:21.313 [2024-11-26 19:55:12.194020] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:21.313 [2024-11-26 19:55:12.194032] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:21.313 [2024-11-26 19:55:12.194041] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:21.313 [2024-11-26 19:55:12.195669] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:21.313 [2024-11-26 19:55:12.195770] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:21.313 [2024-11-26 19:55:12.195814] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:14:21.313 [2024-11-26 19:55:12.195858] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:21.313 pt1 00:14:21.313 19:55:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.313 19:55:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:14:21.313 19:55:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:21.313 19:55:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:21.313 19:55:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:21.313 19:55:12 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:21.313 19:55:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:21.313 19:55:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:21.313 19:55:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:21.313 19:55:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:21.313 19:55:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:21.313 19:55:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:21.313 19:55:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:21.313 19:55:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.313 19:55:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:21.313 19:55:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.313 19:55:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:21.313 "name": "raid_bdev1", 00:14:21.313 "uuid": "f0815408-b442-48af-ba6c-5cbe33b38384", 00:14:21.313 "strip_size_kb": 0, 00:14:21.313 "state": "configuring", 00:14:21.313 "raid_level": "raid1", 00:14:21.313 "superblock": true, 00:14:21.313 "num_base_bdevs": 2, 00:14:21.313 "num_base_bdevs_discovered": 1, 00:14:21.313 "num_base_bdevs_operational": 2, 00:14:21.313 "base_bdevs_list": [ 00:14:21.313 { 00:14:21.313 "name": "pt1", 00:14:21.313 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:21.313 "is_configured": true, 00:14:21.313 
"data_offset": 256, 00:14:21.313 "data_size": 7936 00:14:21.313 }, 00:14:21.313 { 00:14:21.313 "name": null, 00:14:21.313 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:21.313 "is_configured": false, 00:14:21.313 "data_offset": 256, 00:14:21.313 "data_size": 7936 00:14:21.313 } 00:14:21.313 ] 00:14:21.313 }' 00:14:21.313 19:55:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:21.313 19:55:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:21.599 19:55:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:14:21.599 19:55:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:14:21.599 19:55:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:21.599 19:55:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:21.599 19:55:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.599 19:55:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:21.599 [2024-11-26 19:55:12.498029] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:21.599 [2024-11-26 19:55:12.498075] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:21.599 [2024-11-26 19:55:12.498090] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:14:21.599 [2024-11-26 19:55:12.498098] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:21.599 [2024-11-26 19:55:12.498216] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:21.599 [2024-11-26 19:55:12.498228] vbdev_passthru.c: 710:vbdev_passthru_register: 
*NOTICE*: created pt_bdev for: pt2 00:14:21.599 [2024-11-26 19:55:12.498257] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:21.599 [2024-11-26 19:55:12.498275] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:21.599 [2024-11-26 19:55:12.498353] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:21.599 [2024-11-26 19:55:12.498363] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:14:21.599 [2024-11-26 19:55:12.498418] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:14:21.599 [2024-11-26 19:55:12.498468] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:21.599 [2024-11-26 19:55:12.498475] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:14:21.599 [2024-11-26 19:55:12.498521] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:21.599 pt2 00:14:21.599 19:55:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.599 19:55:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:14:21.599 19:55:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:21.599 19:55:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:21.599 19:55:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:21.599 19:55:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:21.599 19:55:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:21.599 19:55:12 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:21.599 19:55:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:21.599 19:55:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:21.599 19:55:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:21.600 19:55:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:21.600 19:55:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:21.600 19:55:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:21.600 19:55:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.600 19:55:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:21.600 19:55:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:21.600 19:55:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.857 19:55:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:21.857 "name": "raid_bdev1", 00:14:21.857 "uuid": "f0815408-b442-48af-ba6c-5cbe33b38384", 00:14:21.857 "strip_size_kb": 0, 00:14:21.857 "state": "online", 00:14:21.857 "raid_level": "raid1", 00:14:21.857 "superblock": true, 00:14:21.857 "num_base_bdevs": 2, 00:14:21.857 "num_base_bdevs_discovered": 2, 00:14:21.857 "num_base_bdevs_operational": 2, 00:14:21.857 "base_bdevs_list": [ 00:14:21.857 { 00:14:21.857 "name": "pt1", 00:14:21.857 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:21.857 "is_configured": true, 00:14:21.857 
"data_offset": 256, 00:14:21.857 "data_size": 7936 00:14:21.857 }, 00:14:21.857 { 00:14:21.857 "name": "pt2", 00:14:21.857 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:21.857 "is_configured": true, 00:14:21.857 "data_offset": 256, 00:14:21.857 "data_size": 7936 00:14:21.857 } 00:14:21.857 ] 00:14:21.857 }' 00:14:21.857 19:55:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:21.857 19:55:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:22.113 19:55:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:14:22.113 19:55:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:14:22.113 19:55:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:22.113 19:55:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:22.113 19:55:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:14:22.113 19:55:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:22.113 19:55:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:22.113 19:55:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:22.113 19:55:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.113 19:55:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:22.113 [2024-11-26 19:55:12.838397] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:22.113 19:55:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:14:22.113 19:55:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:22.113 "name": "raid_bdev1", 00:14:22.113 "aliases": [ 00:14:22.113 "f0815408-b442-48af-ba6c-5cbe33b38384" 00:14:22.113 ], 00:14:22.113 "product_name": "Raid Volume", 00:14:22.113 "block_size": 4128, 00:14:22.113 "num_blocks": 7936, 00:14:22.113 "uuid": "f0815408-b442-48af-ba6c-5cbe33b38384", 00:14:22.113 "md_size": 32, 00:14:22.113 "md_interleave": true, 00:14:22.113 "dif_type": 0, 00:14:22.113 "assigned_rate_limits": { 00:14:22.113 "rw_ios_per_sec": 0, 00:14:22.113 "rw_mbytes_per_sec": 0, 00:14:22.113 "r_mbytes_per_sec": 0, 00:14:22.113 "w_mbytes_per_sec": 0 00:14:22.113 }, 00:14:22.113 "claimed": false, 00:14:22.113 "zoned": false, 00:14:22.113 "supported_io_types": { 00:14:22.113 "read": true, 00:14:22.113 "write": true, 00:14:22.113 "unmap": false, 00:14:22.113 "flush": false, 00:14:22.113 "reset": true, 00:14:22.113 "nvme_admin": false, 00:14:22.113 "nvme_io": false, 00:14:22.113 "nvme_io_md": false, 00:14:22.113 "write_zeroes": true, 00:14:22.113 "zcopy": false, 00:14:22.113 "get_zone_info": false, 00:14:22.113 "zone_management": false, 00:14:22.113 "zone_append": false, 00:14:22.113 "compare": false, 00:14:22.113 "compare_and_write": false, 00:14:22.113 "abort": false, 00:14:22.113 "seek_hole": false, 00:14:22.113 "seek_data": false, 00:14:22.113 "copy": false, 00:14:22.113 "nvme_iov_md": false 00:14:22.113 }, 00:14:22.113 "memory_domains": [ 00:14:22.113 { 00:14:22.113 "dma_device_id": "system", 00:14:22.113 "dma_device_type": 1 00:14:22.113 }, 00:14:22.113 { 00:14:22.113 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:22.113 "dma_device_type": 2 00:14:22.113 }, 00:14:22.114 { 00:14:22.114 "dma_device_id": "system", 00:14:22.114 "dma_device_type": 1 00:14:22.114 }, 00:14:22.114 { 00:14:22.114 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:22.114 "dma_device_type": 2 00:14:22.114 } 00:14:22.114 ], 00:14:22.114 "driver_specific": { 
00:14:22.114 "raid": { 00:14:22.114 "uuid": "f0815408-b442-48af-ba6c-5cbe33b38384", 00:14:22.114 "strip_size_kb": 0, 00:14:22.114 "state": "online", 00:14:22.114 "raid_level": "raid1", 00:14:22.114 "superblock": true, 00:14:22.114 "num_base_bdevs": 2, 00:14:22.114 "num_base_bdevs_discovered": 2, 00:14:22.114 "num_base_bdevs_operational": 2, 00:14:22.114 "base_bdevs_list": [ 00:14:22.114 { 00:14:22.114 "name": "pt1", 00:14:22.114 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:22.114 "is_configured": true, 00:14:22.114 "data_offset": 256, 00:14:22.114 "data_size": 7936 00:14:22.114 }, 00:14:22.114 { 00:14:22.114 "name": "pt2", 00:14:22.114 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:22.114 "is_configured": true, 00:14:22.114 "data_offset": 256, 00:14:22.114 "data_size": 7936 00:14:22.114 } 00:14:22.114 ] 00:14:22.114 } 00:14:22.114 } 00:14:22.114 }' 00:14:22.114 19:55:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:22.114 19:55:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:14:22.114 pt2' 00:14:22.114 19:55:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:22.114 19:55:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:14:22.114 19:55:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:22.114 19:55:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:22.114 19:55:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:14:22.114 19:55:12 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.114 19:55:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:22.114 19:55:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.114 19:55:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:14:22.114 19:55:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:14:22.114 19:55:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:22.114 19:55:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:14:22.114 19:55:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.114 19:55:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:22.114 19:55:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:22.114 19:55:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.114 19:55:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:14:22.114 19:55:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:14:22.114 19:55:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:22.114 19:55:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.114 19:55:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 
00:14:22.114 19:55:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:14:22.114 [2024-11-26 19:55:13.010402] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:22.114 19:55:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.114 19:55:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # '[' f0815408-b442-48af-ba6c-5cbe33b38384 '!=' f0815408-b442-48af-ba6c-5cbe33b38384 ']' 00:14:22.114 19:55:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:14:22.114 19:55:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:22.114 19:55:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:14:22.114 19:55:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:14:22.114 19:55:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.114 19:55:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:22.114 [2024-11-26 19:55:13.038178] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:14:22.114 19:55:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.114 19:55:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:22.114 19:55:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:22.114 19:55:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:22.114 19:55:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 
00:14:22.114 19:55:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:22.114 19:55:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:22.114 19:55:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:22.114 19:55:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:22.114 19:55:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:22.114 19:55:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:22.114 19:55:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:22.114 19:55:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.114 19:55:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:22.114 19:55:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:22.371 19:55:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.371 19:55:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:22.371 "name": "raid_bdev1", 00:14:22.371 "uuid": "f0815408-b442-48af-ba6c-5cbe33b38384", 00:14:22.371 "strip_size_kb": 0, 00:14:22.371 "state": "online", 00:14:22.371 "raid_level": "raid1", 00:14:22.371 "superblock": true, 00:14:22.371 "num_base_bdevs": 2, 00:14:22.371 "num_base_bdevs_discovered": 1, 00:14:22.371 "num_base_bdevs_operational": 1, 00:14:22.371 "base_bdevs_list": [ 00:14:22.371 { 00:14:22.371 "name": null, 00:14:22.371 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:22.371 "is_configured": false, 
00:14:22.371 "data_offset": 0, 00:14:22.371 "data_size": 7936 00:14:22.371 }, 00:14:22.371 { 00:14:22.371 "name": "pt2", 00:14:22.371 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:22.371 "is_configured": true, 00:14:22.371 "data_offset": 256, 00:14:22.371 "data_size": 7936 00:14:22.371 } 00:14:22.371 ] 00:14:22.371 }' 00:14:22.371 19:55:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:22.371 19:55:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:22.635 19:55:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:22.635 19:55:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.635 19:55:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:22.635 [2024-11-26 19:55:13.350234] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:22.635 [2024-11-26 19:55:13.350259] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:22.635 [2024-11-26 19:55:13.350330] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:22.635 [2024-11-26 19:55:13.350391] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:22.635 [2024-11-26 19:55:13.350402] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:14:22.635 19:55:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.635 19:55:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:14:22.635 19:55:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:22.635 19:55:13 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.635 19:55:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:22.635 19:55:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.635 19:55:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:14:22.635 19:55:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:14:22.635 19:55:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:14:22.635 19:55:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:14:22.635 19:55:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:14:22.635 19:55:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.635 19:55:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:22.635 19:55:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.635 19:55:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:14:22.635 19:55:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:14:22.635 19:55:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:14:22.635 19:55:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:14:22.635 19:55:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@519 -- # i=1 00:14:22.635 19:55:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 
00000000-0000-0000-0000-000000000002 00:14:22.635 19:55:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.635 19:55:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:22.635 [2024-11-26 19:55:13.402232] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:22.635 [2024-11-26 19:55:13.402282] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:22.635 [2024-11-26 19:55:13.402296] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:14:22.635 [2024-11-26 19:55:13.402306] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:22.635 [2024-11-26 19:55:13.404068] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:22.635 [2024-11-26 19:55:13.404190] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:22.635 [2024-11-26 19:55:13.404246] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:22.635 [2024-11-26 19:55:13.404292] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:22.635 [2024-11-26 19:55:13.404368] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:14:22.635 [2024-11-26 19:55:13.404380] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:14:22.635 [2024-11-26 19:55:13.404468] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:14:22.635 [2024-11-26 19:55:13.404521] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:14:22.635 [2024-11-26 19:55:13.404527] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:14:22.635 [2024-11-26 19:55:13.404583] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:14:22.635 pt2 00:14:22.635 19:55:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.635 19:55:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:22.635 19:55:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:22.635 19:55:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:22.635 19:55:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:22.635 19:55:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:22.635 19:55:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:22.635 19:55:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:22.635 19:55:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:22.635 19:55:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:22.635 19:55:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:22.635 19:55:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:22.635 19:55:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:22.635 19:55:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.635 19:55:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:22.635 19:55:13 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.635 19:55:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:22.635 "name": "raid_bdev1", 00:14:22.635 "uuid": "f0815408-b442-48af-ba6c-5cbe33b38384", 00:14:22.635 "strip_size_kb": 0, 00:14:22.635 "state": "online", 00:14:22.635 "raid_level": "raid1", 00:14:22.635 "superblock": true, 00:14:22.635 "num_base_bdevs": 2, 00:14:22.635 "num_base_bdevs_discovered": 1, 00:14:22.635 "num_base_bdevs_operational": 1, 00:14:22.635 "base_bdevs_list": [ 00:14:22.635 { 00:14:22.635 "name": null, 00:14:22.635 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:22.635 "is_configured": false, 00:14:22.635 "data_offset": 256, 00:14:22.635 "data_size": 7936 00:14:22.635 }, 00:14:22.635 { 00:14:22.635 "name": "pt2", 00:14:22.635 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:22.635 "is_configured": true, 00:14:22.635 "data_offset": 256, 00:14:22.635 "data_size": 7936 00:14:22.635 } 00:14:22.635 ] 00:14:22.635 }' 00:14:22.635 19:55:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:22.635 19:55:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:22.897 19:55:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:22.897 19:55:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.897 19:55:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:22.897 [2024-11-26 19:55:13.734257] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:22.897 [2024-11-26 19:55:13.734275] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:22.897 [2024-11-26 19:55:13.734324] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:22.897 
[2024-11-26 19:55:13.734387] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:22.897 [2024-11-26 19:55:13.734395] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:14:22.897 19:55:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.897 19:55:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:14:22.897 19:55:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:22.897 19:55:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.897 19:55:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:22.897 19:55:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.897 19:55:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:14:22.897 19:55:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:14:22.897 19:55:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:14:22.897 19:55:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:22.897 19:55:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.897 19:55:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:22.897 [2024-11-26 19:55:13.778292] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:22.897 [2024-11-26 19:55:13.778431] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 
00:14:22.897 [2024-11-26 19:55:13.778454] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:14:22.897 [2024-11-26 19:55:13.778461] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:22.897 [2024-11-26 19:55:13.780140] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:22.897 [2024-11-26 19:55:13.780169] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:22.897 [2024-11-26 19:55:13.780213] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:14:22.897 [2024-11-26 19:55:13.780251] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:22.897 [2024-11-26 19:55:13.780330] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:14:22.897 [2024-11-26 19:55:13.780338] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:22.897 [2024-11-26 19:55:13.780364] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:14:22.897 [2024-11-26 19:55:13.780404] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:22.897 [2024-11-26 19:55:13.780461] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:14:22.897 [2024-11-26 19:55:13.780467] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:14:22.897 [2024-11-26 19:55:13.780525] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:22.897 [2024-11-26 19:55:13.780570] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:14:22.897 [2024-11-26 19:55:13.780581] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:14:22.897 [2024-11-26 
19:55:13.780637] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:22.897 pt1 00:14:22.897 19:55:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.897 19:55:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:14:22.897 19:55:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:22.897 19:55:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:22.897 19:55:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:22.897 19:55:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:22.897 19:55:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:22.897 19:55:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:22.897 19:55:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:22.897 19:55:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:22.897 19:55:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:22.897 19:55:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:22.897 19:55:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:22.897 19:55:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:22.897 19:55:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.897 
19:55:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:22.897 19:55:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.897 19:55:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:22.897 "name": "raid_bdev1", 00:14:22.897 "uuid": "f0815408-b442-48af-ba6c-5cbe33b38384", 00:14:22.897 "strip_size_kb": 0, 00:14:22.897 "state": "online", 00:14:22.897 "raid_level": "raid1", 00:14:22.897 "superblock": true, 00:14:22.897 "num_base_bdevs": 2, 00:14:22.897 "num_base_bdevs_discovered": 1, 00:14:22.897 "num_base_bdevs_operational": 1, 00:14:22.897 "base_bdevs_list": [ 00:14:22.897 { 00:14:22.897 "name": null, 00:14:22.897 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:22.897 "is_configured": false, 00:14:22.897 "data_offset": 256, 00:14:22.897 "data_size": 7936 00:14:22.897 }, 00:14:22.897 { 00:14:22.897 "name": "pt2", 00:14:22.897 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:22.897 "is_configured": true, 00:14:22.897 "data_offset": 256, 00:14:22.897 "data_size": 7936 00:14:22.897 } 00:14:22.897 ] 00:14:22.897 }' 00:14:22.897 19:55:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:22.897 19:55:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:23.185 19:55:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:14:23.185 19:55:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:14:23.185 19:55:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.185 19:55:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:23.185 19:55:14 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.443 19:55:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:14:23.443 19:55:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:14:23.443 19:55:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:23.443 19:55:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.443 19:55:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:23.443 [2024-11-26 19:55:14.138585] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:23.443 19:55:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.443 19:55:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # '[' f0815408-b442-48af-ba6c-5cbe33b38384 '!=' f0815408-b442-48af-ba6c-5cbe33b38384 ']' 00:14:23.443 19:55:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@563 -- # killprocess 86079 00:14:23.443 19:55:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 86079 ']' 00:14:23.443 19:55:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 86079 00:14:23.443 19:55:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:14:23.443 19:55:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:23.443 19:55:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86079 00:14:23.443 killing process with pid 86079 00:14:23.443 19:55:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 
00:14:23.443 19:55:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:23.443 19:55:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86079' 00:14:23.443 19:55:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@973 -- # kill 86079 00:14:23.443 [2024-11-26 19:55:14.189378] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:23.443 19:55:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@978 -- # wait 86079 00:14:23.443 [2024-11-26 19:55:14.189455] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:23.443 [2024-11-26 19:55:14.189500] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:23.443 [2024-11-26 19:55:14.189513] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:14:23.443 [2024-11-26 19:55:14.294911] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:24.007 19:55:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@565 -- # return 0 00:14:24.007 00:14:24.007 real 0m4.398s 00:14:24.007 user 0m6.722s 00:14:24.007 sys 0m0.744s 00:14:24.007 19:55:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:24.007 19:55:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:24.007 ************************************ 00:14:24.007 END TEST raid_superblock_test_md_interleaved 00:14:24.007 ************************************ 00:14:24.007 19:55:14 bdev_raid -- bdev/bdev_raid.sh@1013 -- # run_test raid_rebuild_test_sb_md_interleaved raid_rebuild_test raid1 2 true false false 00:14:24.007 19:55:14 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:14:24.007 19:55:14 
bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:24.007 19:55:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:24.007 ************************************ 00:14:24.007 START TEST raid_rebuild_test_sb_md_interleaved 00:14:24.007 ************************************ 00:14:24.007 19:55:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false false 00:14:24.007 19:55:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:14:24.007 19:55:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:14:24.007 19:55:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:14:24.007 19:55:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:14:24.007 19:55:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # local verify=false 00:14:24.264 19:55:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:24.264 19:55:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:24.264 19:55:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:24.264 19:55:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:24.264 19:55:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:24.264 19:55:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:24.264 19:55:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:24.264 19:55:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:24.264 Waiting 
for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:24.264 19:55:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:14:24.264 19:55:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:24.264 19:55:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:24.264 19:55:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:24.264 19:55:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:24.264 19:55:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:24.264 19:55:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:24.264 19:55:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:14:24.264 19:55:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:14:24.264 19:55:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:14:24.264 19:55:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:14:24.264 19:55:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@597 -- # raid_pid=86391 00:14:24.264 19:55:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@598 -- # waitforlisten 86391 00:14:24.264 19:55:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 86391 ']' 00:14:24.264 19:55:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:24.264 19:55:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:14:24.264 19:55:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:24.264 19:55:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:24.264 19:55:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:24.264 19:55:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:24.264 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:24.264 Zero copy mechanism will not be used. 00:14:24.264 [2024-11-26 19:55:15.016097] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 00:14:24.264 [2024-11-26 19:55:15.016227] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86391 ] 00:14:24.264 [2024-11-26 19:55:15.177956] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:24.521 [2024-11-26 19:55:15.293253] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:24.521 [2024-11-26 19:55:15.439962] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:24.521 [2024-11-26 19:55:15.440145] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:25.087 19:55:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:25.087 19:55:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:14:25.087 19:55:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # 
for bdev in "${base_bdevs[@]}" 00:14:25.087 19:55:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1_malloc 00:14:25.087 19:55:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.087 19:55:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:25.087 BaseBdev1_malloc 00:14:25.087 19:55:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.087 19:55:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:25.087 19:55:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.087 19:55:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:25.087 [2024-11-26 19:55:15.892308] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:25.087 [2024-11-26 19:55:15.892383] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:25.087 [2024-11-26 19:55:15.892408] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:25.087 [2024-11-26 19:55:15.892420] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:25.087 [2024-11-26 19:55:15.894377] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:25.087 [2024-11-26 19:55:15.894520] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:25.087 BaseBdev1 00:14:25.087 19:55:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.088 19:55:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:25.088 19:55:15 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2_malloc 00:14:25.088 19:55:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.088 19:55:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:25.088 BaseBdev2_malloc 00:14:25.088 19:55:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.088 19:55:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:25.088 19:55:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.088 19:55:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:25.088 [2024-11-26 19:55:15.930150] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:25.088 [2024-11-26 19:55:15.930304] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:25.088 [2024-11-26 19:55:15.930327] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:25.088 [2024-11-26 19:55:15.930356] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:25.088 [2024-11-26 19:55:15.932313] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:25.088 [2024-11-26 19:55:15.932358] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:25.088 BaseBdev2 00:14:25.088 19:55:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.088 19:55:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b spare_malloc 00:14:25.088 19:55:15 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.088 19:55:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:25.088 spare_malloc 00:14:25.088 19:55:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.088 19:55:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:25.088 19:55:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.088 19:55:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:25.088 spare_delay 00:14:25.088 19:55:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.088 19:55:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:25.088 19:55:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.088 19:55:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:25.088 [2024-11-26 19:55:15.988114] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:25.088 [2024-11-26 19:55:15.988267] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:25.088 [2024-11-26 19:55:15.988293] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:14:25.088 [2024-11-26 19:55:15.988304] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:25.088 [2024-11-26 19:55:15.990300] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:25.088 [2024-11-26 19:55:15.990337] vbdev_passthru.c: 710:vbdev_passthru_register: 
*NOTICE*: created pt_bdev for: spare 00:14:25.088 spare 00:14:25.088 19:55:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.088 19:55:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:14:25.088 19:55:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.088 19:55:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:25.088 [2024-11-26 19:55:15.996166] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:25.088 [2024-11-26 19:55:15.998102] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:25.088 [2024-11-26 19:55:15.998289] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:25.088 [2024-11-26 19:55:15.998303] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:14:25.088 [2024-11-26 19:55:15.998392] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:14:25.088 [2024-11-26 19:55:15.998469] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:25.088 [2024-11-26 19:55:15.998477] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:25.088 [2024-11-26 19:55:15.998545] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:25.088 19:55:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.088 19:55:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:25.088 19:55:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:14:25.088 19:55:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:25.088 19:55:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:25.088 19:55:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:25.088 19:55:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:25.088 19:55:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:25.088 19:55:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:25.088 19:55:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:25.088 19:55:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:25.088 19:55:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:25.088 19:55:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.088 19:55:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:25.088 19:55:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:25.088 19:55:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.347 19:55:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:25.347 "name": "raid_bdev1", 00:14:25.347 "uuid": "e8b56dfd-dffd-4945-8fb2-d37e1cfeadb1", 00:14:25.347 "strip_size_kb": 0, 00:14:25.347 "state": "online", 00:14:25.347 "raid_level": "raid1", 00:14:25.347 "superblock": true, 00:14:25.347 "num_base_bdevs": 
2, 00:14:25.347 "num_base_bdevs_discovered": 2, 00:14:25.347 "num_base_bdevs_operational": 2, 00:14:25.347 "base_bdevs_list": [ 00:14:25.347 { 00:14:25.347 "name": "BaseBdev1", 00:14:25.347 "uuid": "c2074791-3488-5610-8d1b-fbae176c789e", 00:14:25.347 "is_configured": true, 00:14:25.347 "data_offset": 256, 00:14:25.347 "data_size": 7936 00:14:25.347 }, 00:14:25.347 { 00:14:25.347 "name": "BaseBdev2", 00:14:25.347 "uuid": "3a73ca9b-4981-5992-8e68-6e936394c057", 00:14:25.347 "is_configured": true, 00:14:25.347 "data_offset": 256, 00:14:25.347 "data_size": 7936 00:14:25.347 } 00:14:25.347 ] 00:14:25.347 }' 00:14:25.347 19:55:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:25.347 19:55:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:25.605 19:55:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:25.605 19:55:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:25.606 19:55:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.606 19:55:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:25.606 [2024-11-26 19:55:16.316595] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:25.606 19:55:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.606 19:55:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:14:25.606 19:55:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:25.606 19:55:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:25.606 19:55:16 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.606 19:55:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:25.606 19:55:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.606 19:55:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:14:25.606 19:55:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:14:25.606 19:55:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@624 -- # '[' false = true ']' 00:14:25.606 19:55:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:25.606 19:55:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.606 19:55:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:25.606 [2024-11-26 19:55:16.376226] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:25.606 19:55:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.606 19:55:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:25.606 19:55:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:25.606 19:55:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:25.606 19:55:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:25.606 19:55:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:25.606 19:55:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:25.606 19:55:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:25.606 19:55:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:25.606 19:55:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:25.606 19:55:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:25.606 19:55:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:25.606 19:55:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:25.606 19:55:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.606 19:55:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:25.606 19:55:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.606 19:55:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:25.606 "name": "raid_bdev1", 00:14:25.606 "uuid": "e8b56dfd-dffd-4945-8fb2-d37e1cfeadb1", 00:14:25.606 "strip_size_kb": 0, 00:14:25.606 "state": "online", 00:14:25.606 "raid_level": "raid1", 00:14:25.606 "superblock": true, 00:14:25.606 "num_base_bdevs": 2, 00:14:25.606 "num_base_bdevs_discovered": 1, 00:14:25.606 "num_base_bdevs_operational": 1, 00:14:25.606 "base_bdevs_list": [ 00:14:25.606 { 00:14:25.606 "name": null, 00:14:25.606 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:25.606 "is_configured": false, 00:14:25.606 "data_offset": 0, 00:14:25.606 "data_size": 7936 00:14:25.606 }, 00:14:25.606 { 00:14:25.606 "name": "BaseBdev2", 00:14:25.606 "uuid": "3a73ca9b-4981-5992-8e68-6e936394c057", 
00:14:25.606 "is_configured": true, 00:14:25.606 "data_offset": 256, 00:14:25.606 "data_size": 7936 00:14:25.606 } 00:14:25.606 ] 00:14:25.606 }' 00:14:25.606 19:55:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:25.606 19:55:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:25.864 19:55:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:25.864 19:55:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.864 19:55:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:25.864 [2024-11-26 19:55:16.692367] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:25.864 [2024-11-26 19:55:16.704862] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:14:25.864 19:55:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.864 19:55:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:25.864 [2024-11-26 19:55:16.706841] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:26.799 19:55:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:26.799 19:55:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:26.799 19:55:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:26.799 19:55:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:26.799 19:55:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local 
raid_bdev_info 00:14:26.799 19:55:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:26.799 19:55:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.799 19:55:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:26.799 19:55:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:26.799 19:55:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.080 19:55:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:27.080 "name": "raid_bdev1", 00:14:27.080 "uuid": "e8b56dfd-dffd-4945-8fb2-d37e1cfeadb1", 00:14:27.080 "strip_size_kb": 0, 00:14:27.080 "state": "online", 00:14:27.080 "raid_level": "raid1", 00:14:27.080 "superblock": true, 00:14:27.080 "num_base_bdevs": 2, 00:14:27.080 "num_base_bdevs_discovered": 2, 00:14:27.080 "num_base_bdevs_operational": 2, 00:14:27.080 "process": { 00:14:27.080 "type": "rebuild", 00:14:27.080 "target": "spare", 00:14:27.080 "progress": { 00:14:27.080 "blocks": 2560, 00:14:27.080 "percent": 32 00:14:27.080 } 00:14:27.080 }, 00:14:27.080 "base_bdevs_list": [ 00:14:27.080 { 00:14:27.080 "name": "spare", 00:14:27.080 "uuid": "f3ce3afe-8f3e-51dc-bf6f-75d5bfbbf084", 00:14:27.080 "is_configured": true, 00:14:27.080 "data_offset": 256, 00:14:27.080 "data_size": 7936 00:14:27.080 }, 00:14:27.080 { 00:14:27.080 "name": "BaseBdev2", 00:14:27.080 "uuid": "3a73ca9b-4981-5992-8e68-6e936394c057", 00:14:27.080 "is_configured": true, 00:14:27.080 "data_offset": 256, 00:14:27.080 "data_size": 7936 00:14:27.080 } 00:14:27.080 ] 00:14:27.080 }' 00:14:27.080 19:55:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:27.080 19:55:17 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:27.080 19:55:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:27.080 19:55:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:27.080 19:55:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:27.080 19:55:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.080 19:55:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:27.080 [2024-11-26 19:55:17.820767] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:27.080 [2024-11-26 19:55:17.913813] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:27.080 [2024-11-26 19:55:17.913969] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:27.080 [2024-11-26 19:55:17.914024] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:27.080 [2024-11-26 19:55:17.914050] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:27.080 19:55:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.080 19:55:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:27.080 19:55:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:27.080 19:55:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:27.080 19:55:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:14:27.080 19:55:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:27.080 19:55:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:27.080 19:55:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:27.080 19:55:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:27.080 19:55:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:27.080 19:55:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:27.080 19:55:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:27.080 19:55:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:27.080 19:55:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.080 19:55:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:27.080 19:55:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.080 19:55:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:27.080 "name": "raid_bdev1", 00:14:27.080 "uuid": "e8b56dfd-dffd-4945-8fb2-d37e1cfeadb1", 00:14:27.080 "strip_size_kb": 0, 00:14:27.080 "state": "online", 00:14:27.080 "raid_level": "raid1", 00:14:27.080 "superblock": true, 00:14:27.080 "num_base_bdevs": 2, 00:14:27.080 "num_base_bdevs_discovered": 1, 00:14:27.080 "num_base_bdevs_operational": 1, 00:14:27.080 "base_bdevs_list": [ 00:14:27.080 { 00:14:27.080 "name": null, 00:14:27.080 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:27.080 
"is_configured": false, 00:14:27.080 "data_offset": 0, 00:14:27.080 "data_size": 7936 00:14:27.080 }, 00:14:27.080 { 00:14:27.080 "name": "BaseBdev2", 00:14:27.080 "uuid": "3a73ca9b-4981-5992-8e68-6e936394c057", 00:14:27.080 "is_configured": true, 00:14:27.080 "data_offset": 256, 00:14:27.080 "data_size": 7936 00:14:27.080 } 00:14:27.080 ] 00:14:27.080 }' 00:14:27.080 19:55:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:27.080 19:55:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:27.339 19:55:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:27.339 19:55:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:27.339 19:55:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:27.339 19:55:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:27.339 19:55:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:27.339 19:55:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:27.339 19:55:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:27.339 19:55:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.339 19:55:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:27.339 19:55:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.598 19:55:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:27.598 "name": "raid_bdev1", 
00:14:27.598 "uuid": "e8b56dfd-dffd-4945-8fb2-d37e1cfeadb1", 00:14:27.598 "strip_size_kb": 0, 00:14:27.598 "state": "online", 00:14:27.598 "raid_level": "raid1", 00:14:27.598 "superblock": true, 00:14:27.598 "num_base_bdevs": 2, 00:14:27.598 "num_base_bdevs_discovered": 1, 00:14:27.598 "num_base_bdevs_operational": 1, 00:14:27.598 "base_bdevs_list": [ 00:14:27.598 { 00:14:27.598 "name": null, 00:14:27.598 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:27.598 "is_configured": false, 00:14:27.598 "data_offset": 0, 00:14:27.598 "data_size": 7936 00:14:27.598 }, 00:14:27.598 { 00:14:27.598 "name": "BaseBdev2", 00:14:27.598 "uuid": "3a73ca9b-4981-5992-8e68-6e936394c057", 00:14:27.598 "is_configured": true, 00:14:27.598 "data_offset": 256, 00:14:27.598 "data_size": 7936 00:14:27.598 } 00:14:27.598 ] 00:14:27.598 }' 00:14:27.598 19:55:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:27.598 19:55:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:27.598 19:55:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:27.598 19:55:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:27.598 19:55:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:27.598 19:55:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.598 19:55:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:27.598 [2024-11-26 19:55:18.365818] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:27.598 [2024-11-26 19:55:18.375538] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:27.598 19:55:18 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.598 19:55:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:27.598 [2024-11-26 19:55:18.377205] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:28.531 19:55:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:28.531 19:55:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:28.531 19:55:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:28.531 19:55:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:28.531 19:55:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:28.531 19:55:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:28.531 19:55:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.531 19:55:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:28.531 19:55:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:28.531 19:55:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.531 19:55:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:28.531 "name": "raid_bdev1", 00:14:28.531 "uuid": "e8b56dfd-dffd-4945-8fb2-d37e1cfeadb1", 00:14:28.531 "strip_size_kb": 0, 00:14:28.531 "state": "online", 00:14:28.531 "raid_level": "raid1", 00:14:28.531 "superblock": true, 00:14:28.531 "num_base_bdevs": 2, 00:14:28.531 
"num_base_bdevs_discovered": 2, 00:14:28.531 "num_base_bdevs_operational": 2, 00:14:28.531 "process": { 00:14:28.531 "type": "rebuild", 00:14:28.531 "target": "spare", 00:14:28.531 "progress": { 00:14:28.531 "blocks": 2560, 00:14:28.531 "percent": 32 00:14:28.531 } 00:14:28.531 }, 00:14:28.531 "base_bdevs_list": [ 00:14:28.531 { 00:14:28.531 "name": "spare", 00:14:28.531 "uuid": "f3ce3afe-8f3e-51dc-bf6f-75d5bfbbf084", 00:14:28.531 "is_configured": true, 00:14:28.531 "data_offset": 256, 00:14:28.531 "data_size": 7936 00:14:28.531 }, 00:14:28.531 { 00:14:28.531 "name": "BaseBdev2", 00:14:28.531 "uuid": "3a73ca9b-4981-5992-8e68-6e936394c057", 00:14:28.531 "is_configured": true, 00:14:28.531 "data_offset": 256, 00:14:28.531 "data_size": 7936 00:14:28.531 } 00:14:28.531 ] 00:14:28.531 }' 00:14:28.531 19:55:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:28.531 19:55:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:28.531 19:55:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:28.789 19:55:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:28.789 19:55:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:14:28.789 19:55:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:14:28.789 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:14:28.790 19:55:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:14:28.790 19:55:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:14:28.790 19:55:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # 
'[' 2 -gt 2 ']' 00:14:28.790 19:55:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # local timeout=589 00:14:28.790 19:55:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:28.790 19:55:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:28.790 19:55:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:28.790 19:55:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:28.790 19:55:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:28.790 19:55:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:28.790 19:55:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:28.790 19:55:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:28.790 19:55:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.790 19:55:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:28.790 19:55:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.790 19:55:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:28.790 "name": "raid_bdev1", 00:14:28.790 "uuid": "e8b56dfd-dffd-4945-8fb2-d37e1cfeadb1", 00:14:28.790 "strip_size_kb": 0, 00:14:28.790 "state": "online", 00:14:28.790 "raid_level": "raid1", 00:14:28.790 "superblock": true, 00:14:28.790 "num_base_bdevs": 2, 00:14:28.790 "num_base_bdevs_discovered": 2, 00:14:28.790 "num_base_bdevs_operational": 2, 
00:14:28.790 "process": { 00:14:28.790 "type": "rebuild", 00:14:28.790 "target": "spare", 00:14:28.790 "progress": { 00:14:28.790 "blocks": 2816, 00:14:28.790 "percent": 35 00:14:28.790 } 00:14:28.790 }, 00:14:28.790 "base_bdevs_list": [ 00:14:28.790 { 00:14:28.790 "name": "spare", 00:14:28.790 "uuid": "f3ce3afe-8f3e-51dc-bf6f-75d5bfbbf084", 00:14:28.790 "is_configured": true, 00:14:28.790 "data_offset": 256, 00:14:28.790 "data_size": 7936 00:14:28.790 }, 00:14:28.790 { 00:14:28.790 "name": "BaseBdev2", 00:14:28.790 "uuid": "3a73ca9b-4981-5992-8e68-6e936394c057", 00:14:28.790 "is_configured": true, 00:14:28.790 "data_offset": 256, 00:14:28.790 "data_size": 7936 00:14:28.790 } 00:14:28.790 ] 00:14:28.790 }' 00:14:28.790 19:55:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:28.790 19:55:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:28.790 19:55:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:28.790 19:55:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:28.790 19:55:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:29.724 19:55:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:29.724 19:55:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:29.724 19:55:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:29.724 19:55:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:29.724 19:55:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:29.724 
19:55:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:29.724 19:55:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:29.724 19:55:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:29.724 19:55:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.724 19:55:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:29.724 19:55:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.724 19:55:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:29.724 "name": "raid_bdev1", 00:14:29.724 "uuid": "e8b56dfd-dffd-4945-8fb2-d37e1cfeadb1", 00:14:29.724 "strip_size_kb": 0, 00:14:29.724 "state": "online", 00:14:29.724 "raid_level": "raid1", 00:14:29.724 "superblock": true, 00:14:29.724 "num_base_bdevs": 2, 00:14:29.724 "num_base_bdevs_discovered": 2, 00:14:29.724 "num_base_bdevs_operational": 2, 00:14:29.724 "process": { 00:14:29.724 "type": "rebuild", 00:14:29.724 "target": "spare", 00:14:29.724 "progress": { 00:14:29.724 "blocks": 5632, 00:14:29.724 "percent": 70 00:14:29.724 } 00:14:29.724 }, 00:14:29.724 "base_bdevs_list": [ 00:14:29.724 { 00:14:29.724 "name": "spare", 00:14:29.724 "uuid": "f3ce3afe-8f3e-51dc-bf6f-75d5bfbbf084", 00:14:29.724 "is_configured": true, 00:14:29.724 "data_offset": 256, 00:14:29.724 "data_size": 7936 00:14:29.724 }, 00:14:29.724 { 00:14:29.724 "name": "BaseBdev2", 00:14:29.724 "uuid": "3a73ca9b-4981-5992-8e68-6e936394c057", 00:14:29.724 "is_configured": true, 00:14:29.724 "data_offset": 256, 00:14:29.724 "data_size": 7936 00:14:29.724 } 00:14:29.724 ] 00:14:29.724 }' 00:14:29.724 19:55:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:29.981 19:55:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:29.981 19:55:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:29.981 19:55:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:29.981 19:55:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:30.914 [2024-11-26 19:55:21.494212] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:30.914 [2024-11-26 19:55:21.494287] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:30.914 [2024-11-26 19:55:21.494401] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:30.914 19:55:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:30.914 19:55:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:30.914 19:55:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:30.914 19:55:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:30.914 19:55:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:30.914 19:55:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:30.914 19:55:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:30.914 19:55:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:30.914 19:55:21 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.914 19:55:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:30.914 19:55:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.914 19:55:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:30.914 "name": "raid_bdev1", 00:14:30.914 "uuid": "e8b56dfd-dffd-4945-8fb2-d37e1cfeadb1", 00:14:30.914 "strip_size_kb": 0, 00:14:30.914 "state": "online", 00:14:30.914 "raid_level": "raid1", 00:14:30.914 "superblock": true, 00:14:30.914 "num_base_bdevs": 2, 00:14:30.914 "num_base_bdevs_discovered": 2, 00:14:30.914 "num_base_bdevs_operational": 2, 00:14:30.914 "base_bdevs_list": [ 00:14:30.914 { 00:14:30.914 "name": "spare", 00:14:30.914 "uuid": "f3ce3afe-8f3e-51dc-bf6f-75d5bfbbf084", 00:14:30.914 "is_configured": true, 00:14:30.914 "data_offset": 256, 00:14:30.914 "data_size": 7936 00:14:30.914 }, 00:14:30.914 { 00:14:30.914 "name": "BaseBdev2", 00:14:30.914 "uuid": "3a73ca9b-4981-5992-8e68-6e936394c057", 00:14:30.914 "is_configured": true, 00:14:30.914 "data_offset": 256, 00:14:30.914 "data_size": 7936 00:14:30.914 } 00:14:30.914 ] 00:14:30.914 }' 00:14:30.914 19:55:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:30.914 19:55:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:30.914 19:55:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:30.914 19:55:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:30.914 19:55:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@709 -- # break 00:14:30.914 19:55:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:30.914 19:55:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:30.914 19:55:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:30.914 19:55:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:30.914 19:55:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:30.914 19:55:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:30.914 19:55:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.914 19:55:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:30.914 19:55:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:30.915 19:55:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.915 19:55:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:30.915 "name": "raid_bdev1", 00:14:30.915 "uuid": "e8b56dfd-dffd-4945-8fb2-d37e1cfeadb1", 00:14:30.915 "strip_size_kb": 0, 00:14:30.915 "state": "online", 00:14:30.915 "raid_level": "raid1", 00:14:30.915 "superblock": true, 00:14:30.915 "num_base_bdevs": 2, 00:14:30.915 "num_base_bdevs_discovered": 2, 00:14:30.915 "num_base_bdevs_operational": 2, 00:14:30.915 "base_bdevs_list": [ 00:14:30.915 { 00:14:30.915 "name": "spare", 00:14:30.915 "uuid": "f3ce3afe-8f3e-51dc-bf6f-75d5bfbbf084", 00:14:30.915 "is_configured": true, 00:14:30.915 "data_offset": 256, 00:14:30.915 "data_size": 7936 00:14:30.915 }, 00:14:30.915 { 00:14:30.915 "name": "BaseBdev2", 00:14:30.915 "uuid": 
"3a73ca9b-4981-5992-8e68-6e936394c057", 00:14:30.915 "is_configured": true, 00:14:30.915 "data_offset": 256, 00:14:30.915 "data_size": 7936 00:14:30.915 } 00:14:30.915 ] 00:14:30.915 }' 00:14:30.915 19:55:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:31.173 19:55:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:31.173 19:55:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:31.173 19:55:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:31.173 19:55:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:31.173 19:55:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:31.173 19:55:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:31.173 19:55:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:31.173 19:55:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:31.173 19:55:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:31.173 19:55:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:31.173 19:55:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:31.173 19:55:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:31.173 19:55:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:31.173 19:55:21 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:31.173 19:55:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:31.173 19:55:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.173 19:55:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:31.173 19:55:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.173 19:55:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:31.173 "name": "raid_bdev1", 00:14:31.173 "uuid": "e8b56dfd-dffd-4945-8fb2-d37e1cfeadb1", 00:14:31.173 "strip_size_kb": 0, 00:14:31.173 "state": "online", 00:14:31.173 "raid_level": "raid1", 00:14:31.173 "superblock": true, 00:14:31.173 "num_base_bdevs": 2, 00:14:31.173 "num_base_bdevs_discovered": 2, 00:14:31.173 "num_base_bdevs_operational": 2, 00:14:31.173 "base_bdevs_list": [ 00:14:31.173 { 00:14:31.173 "name": "spare", 00:14:31.173 "uuid": "f3ce3afe-8f3e-51dc-bf6f-75d5bfbbf084", 00:14:31.173 "is_configured": true, 00:14:31.173 "data_offset": 256, 00:14:31.173 "data_size": 7936 00:14:31.173 }, 00:14:31.173 { 00:14:31.173 "name": "BaseBdev2", 00:14:31.173 "uuid": "3a73ca9b-4981-5992-8e68-6e936394c057", 00:14:31.173 "is_configured": true, 00:14:31.173 "data_offset": 256, 00:14:31.173 "data_size": 7936 00:14:31.173 } 00:14:31.173 ] 00:14:31.173 }' 00:14:31.173 19:55:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:31.173 19:55:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:31.431 19:55:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:31.431 19:55:22 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.431 19:55:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:31.431 [2024-11-26 19:55:22.257423] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:31.431 [2024-11-26 19:55:22.257536] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:31.431 [2024-11-26 19:55:22.257673] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:31.431 [2024-11-26 19:55:22.257760] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:31.431 [2024-11-26 19:55:22.257821] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:31.431 19:55:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.431 19:55:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # jq length 00:14:31.431 19:55:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:31.432 19:55:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.432 19:55:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:31.432 19:55:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.432 19:55:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:31.432 19:55:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:14:31.432 19:55:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:14:31.432 19:55:22 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:14:31.432 19:55:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.432 19:55:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:31.432 19:55:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.432 19:55:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:31.432 19:55:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.432 19:55:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:31.432 [2024-11-26 19:55:22.309414] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:31.432 [2024-11-26 19:55:22.309461] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:31.432 [2024-11-26 19:55:22.309481] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:14:31.432 [2024-11-26 19:55:22.309489] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:31.432 [2024-11-26 19:55:22.311297] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:31.432 [2024-11-26 19:55:22.311326] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:31.432 [2024-11-26 19:55:22.311387] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:31.432 [2024-11-26 19:55:22.311430] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:31.432 [2024-11-26 19:55:22.311530] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:31.432 spare 00:14:31.432 19:55:22 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.432 19:55:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:14:31.432 19:55:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.432 19:55:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:31.689 [2024-11-26 19:55:22.411612] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:14:31.689 [2024-11-26 19:55:22.411636] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:14:31.689 [2024-11-26 19:55:22.411732] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:14:31.689 [2024-11-26 19:55:22.411815] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:14:31.689 [2024-11-26 19:55:22.411824] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:14:31.689 [2024-11-26 19:55:22.411904] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:31.689 19:55:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.689 19:55:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:31.689 19:55:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:31.689 19:55:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:31.689 19:55:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:31.689 19:55:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:14:31.689 19:55:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:31.689 19:55:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:31.689 19:55:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:31.689 19:55:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:31.689 19:55:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:31.690 19:55:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:31.690 19:55:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:31.690 19:55:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.690 19:55:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:31.690 19:55:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.690 19:55:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:31.690 "name": "raid_bdev1", 00:14:31.690 "uuid": "e8b56dfd-dffd-4945-8fb2-d37e1cfeadb1", 00:14:31.690 "strip_size_kb": 0, 00:14:31.690 "state": "online", 00:14:31.690 "raid_level": "raid1", 00:14:31.690 "superblock": true, 00:14:31.690 "num_base_bdevs": 2, 00:14:31.690 "num_base_bdevs_discovered": 2, 00:14:31.690 "num_base_bdevs_operational": 2, 00:14:31.690 "base_bdevs_list": [ 00:14:31.690 { 00:14:31.690 "name": "spare", 00:14:31.690 "uuid": "f3ce3afe-8f3e-51dc-bf6f-75d5bfbbf084", 00:14:31.690 "is_configured": true, 00:14:31.690 "data_offset": 256, 00:14:31.690 "data_size": 7936 00:14:31.690 }, 00:14:31.690 { 00:14:31.690 "name": 
"BaseBdev2", 00:14:31.690 "uuid": "3a73ca9b-4981-5992-8e68-6e936394c057", 00:14:31.690 "is_configured": true, 00:14:31.690 "data_offset": 256, 00:14:31.690 "data_size": 7936 00:14:31.690 } 00:14:31.690 ] 00:14:31.690 }' 00:14:31.690 19:55:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:31.690 19:55:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:31.948 19:55:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:31.948 19:55:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:31.948 19:55:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:31.948 19:55:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:31.948 19:55:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:31.948 19:55:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:31.948 19:55:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.948 19:55:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:31.948 19:55:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:31.948 19:55:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.948 19:55:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:31.948 "name": "raid_bdev1", 00:14:31.948 "uuid": "e8b56dfd-dffd-4945-8fb2-d37e1cfeadb1", 00:14:31.948 "strip_size_kb": 0, 00:14:31.948 "state": "online", 00:14:31.948 
"raid_level": "raid1", 00:14:31.948 "superblock": true, 00:14:31.948 "num_base_bdevs": 2, 00:14:31.948 "num_base_bdevs_discovered": 2, 00:14:31.948 "num_base_bdevs_operational": 2, 00:14:31.948 "base_bdevs_list": [ 00:14:31.948 { 00:14:31.948 "name": "spare", 00:14:31.948 "uuid": "f3ce3afe-8f3e-51dc-bf6f-75d5bfbbf084", 00:14:31.948 "is_configured": true, 00:14:31.948 "data_offset": 256, 00:14:31.948 "data_size": 7936 00:14:31.948 }, 00:14:31.948 { 00:14:31.948 "name": "BaseBdev2", 00:14:31.948 "uuid": "3a73ca9b-4981-5992-8e68-6e936394c057", 00:14:31.948 "is_configured": true, 00:14:31.948 "data_offset": 256, 00:14:31.948 "data_size": 7936 00:14:31.948 } 00:14:31.948 ] 00:14:31.948 }' 00:14:31.948 19:55:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:31.948 19:55:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:31.948 19:55:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:31.948 19:55:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:31.948 19:55:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:14:31.948 19:55:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:31.948 19:55:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.948 19:55:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:31.948 19:55:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.206 19:55:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:14:32.206 19:55:22 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:32.206 19:55:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.206 19:55:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:32.206 [2024-11-26 19:55:22.901610] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:32.206 19:55:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.206 19:55:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:32.206 19:55:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:32.206 19:55:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:32.206 19:55:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:32.206 19:55:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:32.206 19:55:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:32.206 19:55:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:32.206 19:55:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:32.206 19:55:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:32.206 19:55:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:32.206 19:55:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:32.206 19:55:22 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.206 19:55:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:32.206 19:55:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:32.206 19:55:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.206 19:55:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:32.206 "name": "raid_bdev1", 00:14:32.206 "uuid": "e8b56dfd-dffd-4945-8fb2-d37e1cfeadb1", 00:14:32.206 "strip_size_kb": 0, 00:14:32.206 "state": "online", 00:14:32.206 "raid_level": "raid1", 00:14:32.206 "superblock": true, 00:14:32.206 "num_base_bdevs": 2, 00:14:32.206 "num_base_bdevs_discovered": 1, 00:14:32.206 "num_base_bdevs_operational": 1, 00:14:32.206 "base_bdevs_list": [ 00:14:32.206 { 00:14:32.206 "name": null, 00:14:32.206 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:32.206 "is_configured": false, 00:14:32.206 "data_offset": 0, 00:14:32.206 "data_size": 7936 00:14:32.206 }, 00:14:32.206 { 00:14:32.206 "name": "BaseBdev2", 00:14:32.206 "uuid": "3a73ca9b-4981-5992-8e68-6e936394c057", 00:14:32.206 "is_configured": true, 00:14:32.206 "data_offset": 256, 00:14:32.206 "data_size": 7936 00:14:32.206 } 00:14:32.206 ] 00:14:32.206 }' 00:14:32.206 19:55:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:32.206 19:55:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:32.464 19:55:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:32.464 19:55:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.464 19:55:23 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:32.464 [2024-11-26 19:55:23.213653] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:32.464 [2024-11-26 19:55:23.213840] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:14:32.464 [2024-11-26 19:55:23.213853] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:14:32.464 [2024-11-26 19:55:23.213888] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:32.464 [2024-11-26 19:55:23.223107] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:14:32.464 19:55:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.464 19:55:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@757 -- # sleep 1 00:14:32.464 [2024-11-26 19:55:23.224860] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:33.442 19:55:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:33.442 19:55:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:33.442 19:55:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:33.442 19:55:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:33.442 19:55:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:33.442 19:55:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:33.442 19:55:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.442 19:55:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:33.442 19:55:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:33.442 19:55:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.442 19:55:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:33.442 "name": "raid_bdev1", 00:14:33.442 "uuid": "e8b56dfd-dffd-4945-8fb2-d37e1cfeadb1", 00:14:33.442 "strip_size_kb": 0, 00:14:33.442 "state": "online", 00:14:33.442 "raid_level": "raid1", 00:14:33.442 "superblock": true, 00:14:33.442 "num_base_bdevs": 2, 00:14:33.442 "num_base_bdevs_discovered": 2, 00:14:33.442 "num_base_bdevs_operational": 2, 00:14:33.442 "process": { 00:14:33.442 "type": "rebuild", 00:14:33.442 "target": "spare", 00:14:33.442 "progress": { 00:14:33.442 "blocks": 2560, 00:14:33.442 "percent": 32 00:14:33.442 } 00:14:33.442 }, 00:14:33.442 "base_bdevs_list": [ 00:14:33.442 { 00:14:33.442 "name": "spare", 00:14:33.442 "uuid": "f3ce3afe-8f3e-51dc-bf6f-75d5bfbbf084", 00:14:33.442 "is_configured": true, 00:14:33.442 "data_offset": 256, 00:14:33.442 "data_size": 7936 00:14:33.442 }, 00:14:33.442 { 00:14:33.442 "name": "BaseBdev2", 00:14:33.442 "uuid": "3a73ca9b-4981-5992-8e68-6e936394c057", 00:14:33.442 "is_configured": true, 00:14:33.442 "data_offset": 256, 00:14:33.442 "data_size": 7936 00:14:33.442 } 00:14:33.442 ] 00:14:33.442 }' 00:14:33.442 19:55:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:33.442 19:55:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:33.442 19:55:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // 
"none"' 00:14:33.442 19:55:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:33.442 19:55:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:14:33.442 19:55:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.442 19:55:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:33.442 [2024-11-26 19:55:24.343109] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:33.699 [2024-11-26 19:55:24.431571] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:33.699 [2024-11-26 19:55:24.431628] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:33.699 [2024-11-26 19:55:24.431641] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:33.699 [2024-11-26 19:55:24.431648] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:33.699 19:55:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.699 19:55:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:33.699 19:55:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:33.699 19:55:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:33.699 19:55:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:33.699 19:55:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:33.699 19:55:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:14:33.699 19:55:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:33.699 19:55:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:33.699 19:55:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:33.699 19:55:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:33.699 19:55:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:33.699 19:55:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:33.699 19:55:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.699 19:55:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:33.699 19:55:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.699 19:55:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:33.699 "name": "raid_bdev1", 00:14:33.699 "uuid": "e8b56dfd-dffd-4945-8fb2-d37e1cfeadb1", 00:14:33.699 "strip_size_kb": 0, 00:14:33.699 "state": "online", 00:14:33.699 "raid_level": "raid1", 00:14:33.699 "superblock": true, 00:14:33.699 "num_base_bdevs": 2, 00:14:33.699 "num_base_bdevs_discovered": 1, 00:14:33.699 "num_base_bdevs_operational": 1, 00:14:33.699 "base_bdevs_list": [ 00:14:33.699 { 00:14:33.699 "name": null, 00:14:33.699 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:33.699 "is_configured": false, 00:14:33.699 "data_offset": 0, 00:14:33.699 "data_size": 7936 00:14:33.699 }, 00:14:33.699 { 00:14:33.699 "name": "BaseBdev2", 00:14:33.699 "uuid": "3a73ca9b-4981-5992-8e68-6e936394c057", 00:14:33.699 "is_configured": true, 
00:14:33.699 "data_offset": 256, 00:14:33.699 "data_size": 7936 00:14:33.699 } 00:14:33.699 ] 00:14:33.699 }' 00:14:33.699 19:55:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:33.699 19:55:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:33.957 19:55:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:33.957 19:55:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.957 19:55:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:33.957 [2024-11-26 19:55:24.790757] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:33.957 [2024-11-26 19:55:24.790824] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:33.957 [2024-11-26 19:55:24.790848] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:14:33.957 [2024-11-26 19:55:24.790858] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:33.957 [2024-11-26 19:55:24.791063] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:33.957 [2024-11-26 19:55:24.791076] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:33.957 [2024-11-26 19:55:24.791130] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:33.957 [2024-11-26 19:55:24.791143] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:14:33.957 [2024-11-26 19:55:24.791152] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:14:33.957 [2024-11-26 19:55:24.791170] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:33.957 [2024-11-26 19:55:24.800329] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:14:33.957 spare 00:14:33.957 19:55:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.957 19:55:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@764 -- # sleep 1 00:14:33.957 [2024-11-26 19:55:24.801998] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:34.891 19:55:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:34.891 19:55:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:34.891 19:55:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:34.891 19:55:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:34.891 19:55:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:34.891 19:55:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:34.891 19:55:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:34.891 19:55:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.891 19:55:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:34.891 19:55:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.149 19:55:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:14:35.149 "name": "raid_bdev1", 00:14:35.149 "uuid": "e8b56dfd-dffd-4945-8fb2-d37e1cfeadb1", 00:14:35.149 "strip_size_kb": 0, 00:14:35.149 "state": "online", 00:14:35.149 "raid_level": "raid1", 00:14:35.149 "superblock": true, 00:14:35.149 "num_base_bdevs": 2, 00:14:35.149 "num_base_bdevs_discovered": 2, 00:14:35.149 "num_base_bdevs_operational": 2, 00:14:35.149 "process": { 00:14:35.149 "type": "rebuild", 00:14:35.149 "target": "spare", 00:14:35.149 "progress": { 00:14:35.149 "blocks": 2560, 00:14:35.149 "percent": 32 00:14:35.149 } 00:14:35.149 }, 00:14:35.149 "base_bdevs_list": [ 00:14:35.149 { 00:14:35.149 "name": "spare", 00:14:35.149 "uuid": "f3ce3afe-8f3e-51dc-bf6f-75d5bfbbf084", 00:14:35.149 "is_configured": true, 00:14:35.149 "data_offset": 256, 00:14:35.149 "data_size": 7936 00:14:35.149 }, 00:14:35.149 { 00:14:35.149 "name": "BaseBdev2", 00:14:35.149 "uuid": "3a73ca9b-4981-5992-8e68-6e936394c057", 00:14:35.149 "is_configured": true, 00:14:35.149 "data_offset": 256, 00:14:35.149 "data_size": 7936 00:14:35.149 } 00:14:35.149 ] 00:14:35.149 }' 00:14:35.149 19:55:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:35.149 19:55:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:35.149 19:55:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:35.149 19:55:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:35.149 19:55:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:14:35.150 19:55:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.150 19:55:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:35.150 [2024-11-26 
19:55:25.912222] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:35.150 [2024-11-26 19:55:26.008741] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:35.150 [2024-11-26 19:55:26.008793] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:35.150 [2024-11-26 19:55:26.008809] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:35.150 [2024-11-26 19:55:26.008815] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:35.150 19:55:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.150 19:55:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:35.150 19:55:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:35.150 19:55:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:35.150 19:55:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:35.150 19:55:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:35.150 19:55:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:35.150 19:55:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:35.150 19:55:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:35.150 19:55:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:35.150 19:55:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:35.150 19:55:26 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:35.150 19:55:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:35.150 19:55:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.150 19:55:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:35.150 19:55:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.150 19:55:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:35.150 "name": "raid_bdev1", 00:14:35.150 "uuid": "e8b56dfd-dffd-4945-8fb2-d37e1cfeadb1", 00:14:35.150 "strip_size_kb": 0, 00:14:35.150 "state": "online", 00:14:35.150 "raid_level": "raid1", 00:14:35.150 "superblock": true, 00:14:35.150 "num_base_bdevs": 2, 00:14:35.150 "num_base_bdevs_discovered": 1, 00:14:35.150 "num_base_bdevs_operational": 1, 00:14:35.150 "base_bdevs_list": [ 00:14:35.150 { 00:14:35.150 "name": null, 00:14:35.150 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:35.150 "is_configured": false, 00:14:35.150 "data_offset": 0, 00:14:35.150 "data_size": 7936 00:14:35.150 }, 00:14:35.150 { 00:14:35.150 "name": "BaseBdev2", 00:14:35.150 "uuid": "3a73ca9b-4981-5992-8e68-6e936394c057", 00:14:35.150 "is_configured": true, 00:14:35.150 "data_offset": 256, 00:14:35.150 "data_size": 7936 00:14:35.150 } 00:14:35.150 ] 00:14:35.150 }' 00:14:35.150 19:55:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:35.150 19:55:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:35.408 19:55:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:35.408 19:55:26 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:35.408 19:55:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:35.408 19:55:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:35.408 19:55:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:35.667 19:55:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:35.667 19:55:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:35.667 19:55:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.667 19:55:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:35.667 19:55:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.667 19:55:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:35.667 "name": "raid_bdev1", 00:14:35.667 "uuid": "e8b56dfd-dffd-4945-8fb2-d37e1cfeadb1", 00:14:35.667 "strip_size_kb": 0, 00:14:35.667 "state": "online", 00:14:35.667 "raid_level": "raid1", 00:14:35.667 "superblock": true, 00:14:35.667 "num_base_bdevs": 2, 00:14:35.667 "num_base_bdevs_discovered": 1, 00:14:35.667 "num_base_bdevs_operational": 1, 00:14:35.667 "base_bdevs_list": [ 00:14:35.667 { 00:14:35.667 "name": null, 00:14:35.667 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:35.667 "is_configured": false, 00:14:35.667 "data_offset": 0, 00:14:35.667 "data_size": 7936 00:14:35.667 }, 00:14:35.667 { 00:14:35.667 "name": "BaseBdev2", 00:14:35.667 "uuid": "3a73ca9b-4981-5992-8e68-6e936394c057", 00:14:35.667 "is_configured": true, 00:14:35.667 "data_offset": 256, 
00:14:35.667 "data_size": 7936 00:14:35.667 } 00:14:35.667 ] 00:14:35.667 }' 00:14:35.667 19:55:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:35.667 19:55:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:35.667 19:55:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:35.667 19:55:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:35.667 19:55:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:14:35.667 19:55:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.667 19:55:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:35.667 19:55:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.667 19:55:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:35.667 19:55:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.667 19:55:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:35.667 [2024-11-26 19:55:26.448051] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:35.668 [2024-11-26 19:55:26.448183] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:35.668 [2024-11-26 19:55:26.448219] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:14:35.668 [2024-11-26 19:55:26.448265] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:35.668 [2024-11-26 19:55:26.448453] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:35.668 [2024-11-26 19:55:26.448525] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:35.668 [2024-11-26 19:55:26.448588] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:14:35.668 [2024-11-26 19:55:26.448609] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:14:35.668 [2024-11-26 19:55:26.448667] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:35.668 [2024-11-26 19:55:26.448685] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:14:35.668 BaseBdev1 00:14:35.668 19:55:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.668 19:55:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@775 -- # sleep 1 00:14:36.602 19:55:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:36.602 19:55:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:36.602 19:55:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:36.602 19:55:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:36.602 19:55:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:36.602 19:55:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:36.602 19:55:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:36.602 19:55:27 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:36.602 19:55:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:36.602 19:55:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:36.602 19:55:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:36.602 19:55:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:36.602 19:55:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.602 19:55:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:36.602 19:55:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.602 19:55:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:36.602 "name": "raid_bdev1", 00:14:36.602 "uuid": "e8b56dfd-dffd-4945-8fb2-d37e1cfeadb1", 00:14:36.602 "strip_size_kb": 0, 00:14:36.602 "state": "online", 00:14:36.602 "raid_level": "raid1", 00:14:36.602 "superblock": true, 00:14:36.602 "num_base_bdevs": 2, 00:14:36.602 "num_base_bdevs_discovered": 1, 00:14:36.602 "num_base_bdevs_operational": 1, 00:14:36.602 "base_bdevs_list": [ 00:14:36.602 { 00:14:36.602 "name": null, 00:14:36.602 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:36.602 "is_configured": false, 00:14:36.602 "data_offset": 0, 00:14:36.602 "data_size": 7936 00:14:36.602 }, 00:14:36.602 { 00:14:36.602 "name": "BaseBdev2", 00:14:36.602 "uuid": "3a73ca9b-4981-5992-8e68-6e936394c057", 00:14:36.602 "is_configured": true, 00:14:36.602 "data_offset": 256, 00:14:36.602 "data_size": 7936 00:14:36.602 } 00:14:36.602 ] 00:14:36.602 }' 00:14:36.602 19:55:27 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:36.602 19:55:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:37.169 19:55:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:37.169 19:55:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:37.169 19:55:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:37.169 19:55:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:37.169 19:55:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:37.169 19:55:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:37.169 19:55:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:37.169 19:55:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.169 19:55:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:37.169 19:55:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.169 19:55:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:37.169 "name": "raid_bdev1", 00:14:37.169 "uuid": "e8b56dfd-dffd-4945-8fb2-d37e1cfeadb1", 00:14:37.169 "strip_size_kb": 0, 00:14:37.169 "state": "online", 00:14:37.169 "raid_level": "raid1", 00:14:37.169 "superblock": true, 00:14:37.169 "num_base_bdevs": 2, 00:14:37.169 "num_base_bdevs_discovered": 1, 00:14:37.169 "num_base_bdevs_operational": 1, 00:14:37.169 "base_bdevs_list": [ 00:14:37.169 { 00:14:37.169 "name": 
null, 00:14:37.169 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:37.169 "is_configured": false, 00:14:37.169 "data_offset": 0, 00:14:37.169 "data_size": 7936 00:14:37.169 }, 00:14:37.169 { 00:14:37.169 "name": "BaseBdev2", 00:14:37.169 "uuid": "3a73ca9b-4981-5992-8e68-6e936394c057", 00:14:37.169 "is_configured": true, 00:14:37.169 "data_offset": 256, 00:14:37.169 "data_size": 7936 00:14:37.169 } 00:14:37.169 ] 00:14:37.169 }' 00:14:37.169 19:55:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:37.169 19:55:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:37.169 19:55:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:37.169 19:55:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:37.170 19:55:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:37.170 19:55:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@652 -- # local es=0 00:14:37.170 19:55:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:37.170 19:55:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:14:37.170 19:55:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:37.170 19:55:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:14:37.170 19:55:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:37.170 19:55:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:37.170 19:55:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.170 19:55:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:37.170 [2024-11-26 19:55:27.904366] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:37.170 [2024-11-26 19:55:27.904520] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:14:37.170 [2024-11-26 19:55:27.904534] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:37.170 request: 00:14:37.170 { 00:14:37.170 "base_bdev": "BaseBdev1", 00:14:37.170 "raid_bdev": "raid_bdev1", 00:14:37.170 "method": "bdev_raid_add_base_bdev", 00:14:37.170 "req_id": 1 00:14:37.170 } 00:14:37.170 Got JSON-RPC error response 00:14:37.170 response: 00:14:37.170 { 00:14:37.170 "code": -22, 00:14:37.170 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:14:37.170 } 00:14:37.170 19:55:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:14:37.170 19:55:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@655 -- # es=1 00:14:37.170 19:55:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:37.170 19:55:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:37.170 19:55:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:37.170 19:55:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@779 -- # sleep 1 00:14:38.102 19:55:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:14:38.102 19:55:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:38.102 19:55:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:38.102 19:55:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:38.102 19:55:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:38.102 19:55:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:38.102 19:55:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:38.102 19:55:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:38.102 19:55:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:38.102 19:55:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:38.102 19:55:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:38.102 19:55:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:38.102 19:55:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.102 19:55:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:38.102 19:55:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.102 19:55:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:38.102 "name": "raid_bdev1", 00:14:38.102 "uuid": "e8b56dfd-dffd-4945-8fb2-d37e1cfeadb1", 00:14:38.102 "strip_size_kb": 0, 
00:14:38.102 "state": "online", 00:14:38.102 "raid_level": "raid1", 00:14:38.102 "superblock": true, 00:14:38.102 "num_base_bdevs": 2, 00:14:38.102 "num_base_bdevs_discovered": 1, 00:14:38.102 "num_base_bdevs_operational": 1, 00:14:38.102 "base_bdevs_list": [ 00:14:38.102 { 00:14:38.102 "name": null, 00:14:38.102 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:38.102 "is_configured": false, 00:14:38.102 "data_offset": 0, 00:14:38.102 "data_size": 7936 00:14:38.102 }, 00:14:38.102 { 00:14:38.102 "name": "BaseBdev2", 00:14:38.102 "uuid": "3a73ca9b-4981-5992-8e68-6e936394c057", 00:14:38.102 "is_configured": true, 00:14:38.102 "data_offset": 256, 00:14:38.102 "data_size": 7936 00:14:38.102 } 00:14:38.102 ] 00:14:38.102 }' 00:14:38.102 19:55:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:38.102 19:55:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:38.360 19:55:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:38.360 19:55:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:38.360 19:55:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:38.360 19:55:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:38.360 19:55:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:38.360 19:55:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:38.360 19:55:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.360 19:55:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:38.360 19:55:29 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:38.360 19:55:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.360 19:55:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:38.360 "name": "raid_bdev1", 00:14:38.360 "uuid": "e8b56dfd-dffd-4945-8fb2-d37e1cfeadb1", 00:14:38.360 "strip_size_kb": 0, 00:14:38.360 "state": "online", 00:14:38.360 "raid_level": "raid1", 00:14:38.360 "superblock": true, 00:14:38.360 "num_base_bdevs": 2, 00:14:38.360 "num_base_bdevs_discovered": 1, 00:14:38.360 "num_base_bdevs_operational": 1, 00:14:38.360 "base_bdevs_list": [ 00:14:38.360 { 00:14:38.360 "name": null, 00:14:38.360 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:38.360 "is_configured": false, 00:14:38.360 "data_offset": 0, 00:14:38.360 "data_size": 7936 00:14:38.360 }, 00:14:38.360 { 00:14:38.360 "name": "BaseBdev2", 00:14:38.360 "uuid": "3a73ca9b-4981-5992-8e68-6e936394c057", 00:14:38.360 "is_configured": true, 00:14:38.360 "data_offset": 256, 00:14:38.360 "data_size": 7936 00:14:38.360 } 00:14:38.360 ] 00:14:38.360 }' 00:14:38.360 19:55:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:38.360 19:55:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:38.360 19:55:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:38.618 19:55:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:38.618 19:55:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@784 -- # killprocess 86391 00:14:38.618 19:55:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 86391 ']' 00:14:38.618 19:55:29 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 86391 00:14:38.618 19:55:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:14:38.618 19:55:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:38.618 19:55:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86391 00:14:38.618 killing process with pid 86391 00:14:38.618 Received shutdown signal, test time was about 60.000000 seconds 00:14:38.618 00:14:38.618 Latency(us) 00:14:38.618 [2024-11-26T19:55:29.555Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:38.618 [2024-11-26T19:55:29.555Z] =================================================================================================================== 00:14:38.618 [2024-11-26T19:55:29.555Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:38.618 19:55:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:38.618 19:55:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:38.618 19:55:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86391' 00:14:38.618 19:55:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@973 -- # kill 86391 00:14:38.618 19:55:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@978 -- # wait 86391 00:14:38.618 [2024-11-26 19:55:29.331191] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:38.618 [2024-11-26 19:55:29.331305] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:38.618 [2024-11-26 19:55:29.331359] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to 
free all in destruct 00:14:38.618 [2024-11-26 19:55:29.331372] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:14:38.618 [2024-11-26 19:55:29.483631] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:39.183 19:55:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@786 -- # return 0 00:14:39.183 ************************************ 00:14:39.183 END TEST raid_rebuild_test_sb_md_interleaved 00:14:39.183 ************************************ 00:14:39.183 00:14:39.183 real 0m15.122s 00:14:39.183 user 0m19.263s 00:14:39.183 sys 0m1.089s 00:14:39.184 19:55:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:39.184 19:55:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:39.184 19:55:30 bdev_raid -- bdev/bdev_raid.sh@1015 -- # trap - EXIT 00:14:39.184 19:55:30 bdev_raid -- bdev/bdev_raid.sh@1016 -- # cleanup 00:14:39.184 19:55:30 bdev_raid -- bdev/bdev_raid.sh@56 -- # '[' -n 86391 ']' 00:14:39.184 19:55:30 bdev_raid -- bdev/bdev_raid.sh@56 -- # ps -p 86391 00:14:39.184 19:55:30 bdev_raid -- bdev/bdev_raid.sh@60 -- # rm -rf /raidtest 00:14:39.443 00:14:39.443 real 9m29.401s 00:14:39.443 user 12m36.739s 00:14:39.443 sys 1m22.659s 00:14:39.443 ************************************ 00:14:39.443 END TEST bdev_raid 00:14:39.443 ************************************ 00:14:39.443 19:55:30 bdev_raid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:39.443 19:55:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:39.443 19:55:30 -- spdk/autotest.sh@190 -- # run_test spdkcli_raid /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:14:39.443 19:55:30 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:39.443 19:55:30 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:39.443 19:55:30 -- common/autotest_common.sh@10 -- # set +x 00:14:39.443 
************************************ 00:14:39.443 START TEST spdkcli_raid 00:14:39.443 ************************************ 00:14:39.443 19:55:30 spdkcli_raid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:14:39.443 * Looking for test storage... 00:14:39.443 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:14:39.443 19:55:30 spdkcli_raid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:39.443 19:55:30 spdkcli_raid -- common/autotest_common.sh@1693 -- # lcov --version 00:14:39.443 19:55:30 spdkcli_raid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:39.443 19:55:30 spdkcli_raid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:39.443 19:55:30 spdkcli_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:39.443 19:55:30 spdkcli_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:39.443 19:55:30 spdkcli_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:39.443 19:55:30 spdkcli_raid -- scripts/common.sh@336 -- # IFS=.-: 00:14:39.443 19:55:30 spdkcli_raid -- scripts/common.sh@336 -- # read -ra ver1 00:14:39.443 19:55:30 spdkcli_raid -- scripts/common.sh@337 -- # IFS=.-: 00:14:39.443 19:55:30 spdkcli_raid -- scripts/common.sh@337 -- # read -ra ver2 00:14:39.443 19:55:30 spdkcli_raid -- scripts/common.sh@338 -- # local 'op=<' 00:14:39.443 19:55:30 spdkcli_raid -- scripts/common.sh@340 -- # ver1_l=2 00:14:39.443 19:55:30 spdkcli_raid -- scripts/common.sh@341 -- # ver2_l=1 00:14:39.443 19:55:30 spdkcli_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:39.443 19:55:30 spdkcli_raid -- scripts/common.sh@344 -- # case "$op" in 00:14:39.443 19:55:30 spdkcli_raid -- scripts/common.sh@345 -- # : 1 00:14:39.443 19:55:30 spdkcli_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:39.443 19:55:30 spdkcli_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:39.443 19:55:30 spdkcli_raid -- scripts/common.sh@365 -- # decimal 1 00:14:39.443 19:55:30 spdkcli_raid -- scripts/common.sh@353 -- # local d=1 00:14:39.443 19:55:30 spdkcli_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:39.443 19:55:30 spdkcli_raid -- scripts/common.sh@355 -- # echo 1 00:14:39.443 19:55:30 spdkcli_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:14:39.443 19:55:30 spdkcli_raid -- scripts/common.sh@366 -- # decimal 2 00:14:39.443 19:55:30 spdkcli_raid -- scripts/common.sh@353 -- # local d=2 00:14:39.443 19:55:30 spdkcli_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:39.443 19:55:30 spdkcli_raid -- scripts/common.sh@355 -- # echo 2 00:14:39.443 19:55:30 spdkcli_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:14:39.443 19:55:30 spdkcli_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:39.443 19:55:30 spdkcli_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:39.443 19:55:30 spdkcli_raid -- scripts/common.sh@368 -- # return 0 00:14:39.443 19:55:30 spdkcli_raid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:39.443 19:55:30 spdkcli_raid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:39.443 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:39.443 --rc genhtml_branch_coverage=1 00:14:39.443 --rc genhtml_function_coverage=1 00:14:39.443 --rc genhtml_legend=1 00:14:39.443 --rc geninfo_all_blocks=1 00:14:39.443 --rc geninfo_unexecuted_blocks=1 00:14:39.443 00:14:39.443 ' 00:14:39.443 19:55:30 spdkcli_raid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:39.443 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:39.443 --rc genhtml_branch_coverage=1 00:14:39.443 --rc genhtml_function_coverage=1 00:14:39.443 --rc genhtml_legend=1 00:14:39.443 --rc geninfo_all_blocks=1 00:14:39.443 --rc geninfo_unexecuted_blocks=1 00:14:39.443 00:14:39.443 ' 00:14:39.443 
19:55:30 spdkcli_raid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:39.443 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:39.443 --rc genhtml_branch_coverage=1 00:14:39.443 --rc genhtml_function_coverage=1 00:14:39.443 --rc genhtml_legend=1 00:14:39.443 --rc geninfo_all_blocks=1 00:14:39.443 --rc geninfo_unexecuted_blocks=1 00:14:39.443 00:14:39.443 ' 00:14:39.443 19:55:30 spdkcli_raid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:39.443 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:39.443 --rc genhtml_branch_coverage=1 00:14:39.443 --rc genhtml_function_coverage=1 00:14:39.443 --rc genhtml_legend=1 00:14:39.443 --rc geninfo_all_blocks=1 00:14:39.443 --rc geninfo_unexecuted_blocks=1 00:14:39.443 00:14:39.443 ' 00:14:39.443 19:55:30 spdkcli_raid -- spdkcli/raid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:14:39.443 19:55:30 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:14:39.443 19:55:30 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:14:39.443 19:55:30 spdkcli_raid -- spdkcli/raid.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:14:39.443 19:55:30 spdkcli_raid -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:14:39.443 19:55:30 spdkcli_raid -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:14:39.443 19:55:30 spdkcli_raid -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:14:39.443 19:55:30 spdkcli_raid -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:14:39.443 19:55:30 spdkcli_raid -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:14:39.443 19:55:30 spdkcli_raid -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:14:39.443 19:55:30 spdkcli_raid -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 
00:14:39.443 19:55:30 spdkcli_raid -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:14:39.443 19:55:30 spdkcli_raid -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:14:39.443 19:55:30 spdkcli_raid -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:14:39.444 19:55:30 spdkcli_raid -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:14:39.444 19:55:30 spdkcli_raid -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:14:39.444 19:55:30 spdkcli_raid -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:14:39.444 19:55:30 spdkcli_raid -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:14:39.444 19:55:30 spdkcli_raid -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:14:39.444 19:55:30 spdkcli_raid -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:14:39.444 19:55:30 spdkcli_raid -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:14:39.444 19:55:30 spdkcli_raid -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:14:39.444 19:55:30 spdkcli_raid -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:14:39.444 19:55:30 spdkcli_raid -- spdkcli/raid.sh@12 -- # MATCH_FILE=spdkcli_raid.test 00:14:39.444 19:55:30 spdkcli_raid -- spdkcli/raid.sh@13 -- # SPDKCLI_BRANCH=/bdevs 00:14:39.444 19:55:30 spdkcli_raid -- spdkcli/raid.sh@14 -- # dirname /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:14:39.444 19:55:30 spdkcli_raid -- spdkcli/raid.sh@14 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/spdkcli 00:14:39.444 19:55:30 spdkcli_raid -- spdkcli/raid.sh@14 -- # testdir=/home/vagrant/spdk_repo/spdk/test/spdkcli 00:14:39.444 19:55:30 spdkcli_raid -- spdkcli/raid.sh@15 -- # . 
/home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:14:39.444 19:55:30 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:14:39.444 19:55:30 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:14:39.444 19:55:30 spdkcli_raid -- spdkcli/raid.sh@17 -- # trap cleanup EXIT 00:14:39.444 19:55:30 spdkcli_raid -- spdkcli/raid.sh@19 -- # timing_enter run_spdk_tgt 00:14:39.444 19:55:30 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:39.444 19:55:30 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:14:39.444 19:55:30 spdkcli_raid -- spdkcli/raid.sh@20 -- # run_spdk_tgt 00:14:39.444 19:55:30 spdkcli_raid -- spdkcli/common.sh@27 -- # spdk_tgt_pid=87045 00:14:39.444 19:55:30 spdkcli_raid -- spdkcli/common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:14:39.444 19:55:30 spdkcli_raid -- spdkcli/common.sh@28 -- # waitforlisten 87045 00:14:39.444 19:55:30 spdkcli_raid -- common/autotest_common.sh@835 -- # '[' -z 87045 ']' 00:14:39.444 19:55:30 spdkcli_raid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:39.444 19:55:30 spdkcli_raid -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:39.444 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:39.444 19:55:30 spdkcli_raid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:39.444 19:55:30 spdkcli_raid -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:39.444 19:55:30 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:14:39.701 [2024-11-26 19:55:30.387122] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 
00:14:39.701 [2024-11-26 19:55:30.387234] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87045 ] 00:14:39.701 [2024-11-26 19:55:30.531531] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:39.701 [2024-11-26 19:55:30.627424] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:39.701 [2024-11-26 19:55:30.627438] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:40.267 19:55:31 spdkcli_raid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:40.267 19:55:31 spdkcli_raid -- common/autotest_common.sh@868 -- # return 0 00:14:40.267 19:55:31 spdkcli_raid -- spdkcli/raid.sh@21 -- # timing_exit run_spdk_tgt 00:14:40.267 19:55:31 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:40.267 19:55:31 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:14:40.267 19:55:31 spdkcli_raid -- spdkcli/raid.sh@23 -- # timing_enter spdkcli_create_malloc 00:14:40.267 19:55:31 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:40.267 19:55:31 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:14:40.267 19:55:31 spdkcli_raid -- spdkcli/raid.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 8 512 Malloc1'\'' '\''Malloc1'\'' True 00:14:40.267 '\''/bdevs/malloc create 8 512 Malloc2'\'' '\''Malloc2'\'' True 00:14:40.267 ' 00:14:42.166 Executing command: ['/bdevs/malloc create 8 512 Malloc1', 'Malloc1', True] 00:14:42.166 Executing command: ['/bdevs/malloc create 8 512 Malloc2', 'Malloc2', True] 00:14:42.166 19:55:32 spdkcli_raid -- spdkcli/raid.sh@27 -- # timing_exit spdkcli_create_malloc 00:14:42.166 19:55:32 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:42.166 19:55:32 spdkcli_raid -- 
common/autotest_common.sh@10 -- # set +x 00:14:42.166 19:55:32 spdkcli_raid -- spdkcli/raid.sh@29 -- # timing_enter spdkcli_create_raid 00:14:42.166 19:55:32 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:42.166 19:55:32 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:14:42.166 19:55:32 spdkcli_raid -- spdkcli/raid.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4'\'' '\''testraid'\'' True 00:14:42.166 ' 00:14:43.099 Executing command: ['/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4', 'testraid', True] 00:14:43.099 19:55:33 spdkcli_raid -- spdkcli/raid.sh@32 -- # timing_exit spdkcli_create_raid 00:14:43.100 19:55:33 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:43.100 19:55:33 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:14:43.100 19:55:33 spdkcli_raid -- spdkcli/raid.sh@34 -- # timing_enter spdkcli_check_match 00:14:43.100 19:55:33 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:43.100 19:55:33 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:14:43.100 19:55:33 spdkcli_raid -- spdkcli/raid.sh@35 -- # check_match 00:14:43.100 19:55:33 spdkcli_raid -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /bdevs 00:14:43.726 19:55:34 spdkcli_raid -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test.match 00:14:43.726 19:55:34 spdkcli_raid -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test 00:14:43.726 19:55:34 spdkcli_raid -- spdkcli/raid.sh@36 -- # timing_exit spdkcli_check_match 00:14:43.726 19:55:34 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:43.726 19:55:34 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:14:43.726 19:55:34 spdkcli_raid -- 
spdkcli/raid.sh@38 -- # timing_enter spdkcli_delete_raid 00:14:43.726 19:55:34 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:43.726 19:55:34 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:14:43.726 19:55:34 spdkcli_raid -- spdkcli/raid.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume delete testraid'\'' '\'''\'' True 00:14:43.726 ' 00:14:44.659 Executing command: ['/bdevs/raid_volume delete testraid', '', True] 00:14:44.659 19:55:35 spdkcli_raid -- spdkcli/raid.sh@41 -- # timing_exit spdkcli_delete_raid 00:14:44.659 19:55:35 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:44.659 19:55:35 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:14:44.659 19:55:35 spdkcli_raid -- spdkcli/raid.sh@43 -- # timing_enter spdkcli_delete_malloc 00:14:44.659 19:55:35 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:44.659 19:55:35 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:14:44.659 19:55:35 spdkcli_raid -- spdkcli/raid.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc delete Malloc1'\'' '\'''\'' True 00:14:44.659 '\''/bdevs/malloc delete Malloc2'\'' '\'''\'' True 00:14:44.659 ' 00:14:46.032 Executing command: ['/bdevs/malloc delete Malloc1', '', True] 00:14:46.032 Executing command: ['/bdevs/malloc delete Malloc2', '', True] 00:14:46.032 19:55:36 spdkcli_raid -- spdkcli/raid.sh@47 -- # timing_exit spdkcli_delete_malloc 00:14:46.032 19:55:36 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:46.032 19:55:36 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:14:46.290 19:55:36 spdkcli_raid -- spdkcli/raid.sh@49 -- # killprocess 87045 00:14:46.290 19:55:36 spdkcli_raid -- common/autotest_common.sh@954 -- # '[' -z 87045 ']' 00:14:46.290 19:55:36 spdkcli_raid -- common/autotest_common.sh@958 -- # kill -0 87045 00:14:46.290 19:55:36 spdkcli_raid -- 
common/autotest_common.sh@959 -- # uname 00:14:46.290 19:55:36 spdkcli_raid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:46.290 19:55:36 spdkcli_raid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87045 00:14:46.290 killing process with pid 87045 00:14:46.290 19:55:36 spdkcli_raid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:46.290 19:55:36 spdkcli_raid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:46.290 19:55:36 spdkcli_raid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87045' 00:14:46.290 19:55:36 spdkcli_raid -- common/autotest_common.sh@973 -- # kill 87045 00:14:46.290 19:55:36 spdkcli_raid -- common/autotest_common.sh@978 -- # wait 87045 00:14:47.664 19:55:38 spdkcli_raid -- spdkcli/raid.sh@1 -- # cleanup 00:14:47.664 19:55:38 spdkcli_raid -- spdkcli/common.sh@10 -- # '[' -n 87045 ']' 00:14:47.664 19:55:38 spdkcli_raid -- spdkcli/common.sh@11 -- # killprocess 87045 00:14:47.664 19:55:38 spdkcli_raid -- common/autotest_common.sh@954 -- # '[' -z 87045 ']' 00:14:47.664 19:55:38 spdkcli_raid -- common/autotest_common.sh@958 -- # kill -0 87045 00:14:47.664 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (87045) - No such process 00:14:47.664 Process with pid 87045 is not found 00:14:47.664 19:55:38 spdkcli_raid -- common/autotest_common.sh@981 -- # echo 'Process with pid 87045 is not found' 00:14:47.664 19:55:38 spdkcli_raid -- spdkcli/common.sh@13 -- # '[' -n '' ']' 00:14:47.664 19:55:38 spdkcli_raid -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:14:47.664 19:55:38 spdkcli_raid -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:14:47.664 19:55:38 spdkcli_raid -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_raid.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:14:47.664 00:14:47.664 real 0m8.099s 00:14:47.664 user 0m16.752s 00:14:47.664 sys 
0m0.758s 00:14:47.664 19:55:38 spdkcli_raid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:47.664 ************************************ 00:14:47.664 END TEST spdkcli_raid 00:14:47.664 ************************************ 00:14:47.664 19:55:38 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:14:47.664 19:55:38 -- spdk/autotest.sh@191 -- # run_test blockdev_raid5f /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:14:47.664 19:55:38 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:47.664 19:55:38 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:47.664 19:55:38 -- common/autotest_common.sh@10 -- # set +x 00:14:47.664 ************************************ 00:14:47.664 START TEST blockdev_raid5f 00:14:47.664 ************************************ 00:14:47.664 19:55:38 blockdev_raid5f -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:14:47.664 * Looking for test storage... 00:14:47.664 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:14:47.664 19:55:38 blockdev_raid5f -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:47.664 19:55:38 blockdev_raid5f -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:47.664 19:55:38 blockdev_raid5f -- common/autotest_common.sh@1693 -- # lcov --version 00:14:47.664 19:55:38 blockdev_raid5f -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:47.664 19:55:38 blockdev_raid5f -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:47.664 19:55:38 blockdev_raid5f -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:47.664 19:55:38 blockdev_raid5f -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:47.664 19:55:38 blockdev_raid5f -- scripts/common.sh@336 -- # IFS=.-: 00:14:47.664 19:55:38 blockdev_raid5f -- scripts/common.sh@336 -- # read -ra ver1 00:14:47.664 19:55:38 blockdev_raid5f -- scripts/common.sh@337 -- # IFS=.-: 00:14:47.664 19:55:38 blockdev_raid5f -- scripts/common.sh@337 -- # read -ra 
ver2 00:14:47.664 19:55:38 blockdev_raid5f -- scripts/common.sh@338 -- # local 'op=<' 00:14:47.664 19:55:38 blockdev_raid5f -- scripts/common.sh@340 -- # ver1_l=2 00:14:47.664 19:55:38 blockdev_raid5f -- scripts/common.sh@341 -- # ver2_l=1 00:14:47.664 19:55:38 blockdev_raid5f -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:47.664 19:55:38 blockdev_raid5f -- scripts/common.sh@344 -- # case "$op" in 00:14:47.664 19:55:38 blockdev_raid5f -- scripts/common.sh@345 -- # : 1 00:14:47.664 19:55:38 blockdev_raid5f -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:47.664 19:55:38 blockdev_raid5f -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:47.664 19:55:38 blockdev_raid5f -- scripts/common.sh@365 -- # decimal 1 00:14:47.664 19:55:38 blockdev_raid5f -- scripts/common.sh@353 -- # local d=1 00:14:47.664 19:55:38 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:47.664 19:55:38 blockdev_raid5f -- scripts/common.sh@355 -- # echo 1 00:14:47.664 19:55:38 blockdev_raid5f -- scripts/common.sh@365 -- # ver1[v]=1 00:14:47.665 19:55:38 blockdev_raid5f -- scripts/common.sh@366 -- # decimal 2 00:14:47.665 19:55:38 blockdev_raid5f -- scripts/common.sh@353 -- # local d=2 00:14:47.665 19:55:38 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:47.665 19:55:38 blockdev_raid5f -- scripts/common.sh@355 -- # echo 2 00:14:47.665 19:55:38 blockdev_raid5f -- scripts/common.sh@366 -- # ver2[v]=2 00:14:47.665 19:55:38 blockdev_raid5f -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:47.665 19:55:38 blockdev_raid5f -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:47.665 19:55:38 blockdev_raid5f -- scripts/common.sh@368 -- # return 0 00:14:47.665 19:55:38 blockdev_raid5f -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:47.665 19:55:38 blockdev_raid5f -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:47.665 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:47.665 --rc genhtml_branch_coverage=1 00:14:47.665 --rc genhtml_function_coverage=1 00:14:47.665 --rc genhtml_legend=1 00:14:47.665 --rc geninfo_all_blocks=1 00:14:47.665 --rc geninfo_unexecuted_blocks=1 00:14:47.665 00:14:47.665 ' 00:14:47.665 19:55:38 blockdev_raid5f -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:47.665 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:47.665 --rc genhtml_branch_coverage=1 00:14:47.665 --rc genhtml_function_coverage=1 00:14:47.665 --rc genhtml_legend=1 00:14:47.665 --rc geninfo_all_blocks=1 00:14:47.665 --rc geninfo_unexecuted_blocks=1 00:14:47.665 00:14:47.665 ' 00:14:47.665 19:55:38 blockdev_raid5f -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:47.665 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:47.665 --rc genhtml_branch_coverage=1 00:14:47.665 --rc genhtml_function_coverage=1 00:14:47.665 --rc genhtml_legend=1 00:14:47.665 --rc geninfo_all_blocks=1 00:14:47.665 --rc geninfo_unexecuted_blocks=1 00:14:47.665 00:14:47.665 ' 00:14:47.665 19:55:38 blockdev_raid5f -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:47.665 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:47.665 --rc genhtml_branch_coverage=1 00:14:47.665 --rc genhtml_function_coverage=1 00:14:47.665 --rc genhtml_legend=1 00:14:47.665 --rc geninfo_all_blocks=1 00:14:47.665 --rc geninfo_unexecuted_blocks=1 00:14:47.665 00:14:47.665 ' 00:14:47.665 19:55:38 blockdev_raid5f -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:14:47.665 19:55:38 blockdev_raid5f -- bdev/nbd_common.sh@6 -- # set -e 00:14:47.665 19:55:38 blockdev_raid5f -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:14:47.665 19:55:38 blockdev_raid5f -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:14:47.665 19:55:38 blockdev_raid5f -- bdev/blockdev.sh@14 -- # 
nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:14:47.665 19:55:38 blockdev_raid5f -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:14:47.665 19:55:38 blockdev_raid5f -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:14:47.665 19:55:38 blockdev_raid5f -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:14:47.665 19:55:38 blockdev_raid5f -- bdev/blockdev.sh@20 -- # : 00:14:47.665 19:55:38 blockdev_raid5f -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:14:47.665 19:55:38 blockdev_raid5f -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:14:47.665 19:55:38 blockdev_raid5f -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:14:47.665 19:55:38 blockdev_raid5f -- bdev/blockdev.sh@673 -- # uname -s 00:14:47.665 19:55:38 blockdev_raid5f -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:14:47.665 19:55:38 blockdev_raid5f -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:14:47.665 19:55:38 blockdev_raid5f -- bdev/blockdev.sh@681 -- # test_type=raid5f 00:14:47.665 19:55:38 blockdev_raid5f -- bdev/blockdev.sh@682 -- # crypto_device= 00:14:47.665 19:55:38 blockdev_raid5f -- bdev/blockdev.sh@683 -- # dek= 00:14:47.665 19:55:38 blockdev_raid5f -- bdev/blockdev.sh@684 -- # env_ctx= 00:14:47.665 19:55:38 blockdev_raid5f -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:14:47.665 19:55:38 blockdev_raid5f -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:14:47.665 19:55:38 blockdev_raid5f -- bdev/blockdev.sh@689 -- # [[ raid5f == bdev ]] 00:14:47.665 19:55:38 blockdev_raid5f -- bdev/blockdev.sh@689 -- # [[ raid5f == crypto_* ]] 00:14:47.665 19:55:38 blockdev_raid5f -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:14:47.665 19:55:38 blockdev_raid5f -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=87308 00:14:47.665 19:55:38 blockdev_raid5f -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:14:47.665 19:55:38 blockdev_raid5f -- bdev/blockdev.sh@49 -- # waitforlisten 
87308 00:14:47.665 19:55:38 blockdev_raid5f -- common/autotest_common.sh@835 -- # '[' -z 87308 ']' 00:14:47.665 19:55:38 blockdev_raid5f -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:47.665 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:47.665 19:55:38 blockdev_raid5f -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:47.665 19:55:38 blockdev_raid5f -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:47.665 19:55:38 blockdev_raid5f -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:47.665 19:55:38 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:14:47.665 19:55:38 blockdev_raid5f -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:14:47.665 [2024-11-26 19:55:38.509253] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 00:14:47.665 [2024-11-26 19:55:38.509377] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87308 ] 00:14:47.924 [2024-11-26 19:55:38.661259] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:47.924 [2024-11-26 19:55:38.753767] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:48.488 19:55:39 blockdev_raid5f -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:48.488 19:55:39 blockdev_raid5f -- common/autotest_common.sh@868 -- # return 0 00:14:48.488 19:55:39 blockdev_raid5f -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:14:48.488 19:55:39 blockdev_raid5f -- bdev/blockdev.sh@725 -- # setup_raid5f_conf 00:14:48.488 19:55:39 blockdev_raid5f -- bdev/blockdev.sh@279 -- # rpc_cmd 00:14:48.488 19:55:39 blockdev_raid5f -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.488 19:55:39 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:14:48.488 Malloc0 00:14:48.488 Malloc1 00:14:48.488 Malloc2 00:14:48.488 19:55:39 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.488 19:55:39 blockdev_raid5f -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:14:48.488 19:55:39 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.488 19:55:39 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:14:48.488 19:55:39 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.488 19:55:39 blockdev_raid5f -- bdev/blockdev.sh@739 -- # cat 00:14:48.488 19:55:39 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:14:48.488 19:55:39 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.488 19:55:39 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:14:48.488 19:55:39 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.488 19:55:39 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:14:48.488 19:55:39 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.488 19:55:39 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:14:48.746 19:55:39 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.746 19:55:39 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:14:48.746 19:55:39 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.746 19:55:39 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:14:48.746 19:55:39 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.746 19:55:39 blockdev_raid5f -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:14:48.746 19:55:39 blockdev_raid5f -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 
00:14:48.746 19:55:39 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.746 19:55:39 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:14:48.746 19:55:39 blockdev_raid5f -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:14:48.746 19:55:39 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.746 19:55:39 blockdev_raid5f -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:14:48.746 19:55:39 blockdev_raid5f -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "e578e15f-e871-4d88-9ec5-b2f1e45cec8d"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "e578e15f-e871-4d88-9ec5-b2f1e45cec8d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "e578e15f-e871-4d88-9ec5-b2f1e45cec8d",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "6927480e-a5de-43d5-8afb-ebfd89d50cc4",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "9b31e989-9cfd-4d8b-ab45-ee3885ec3106",' ' "is_configured": true,' ' "data_offset": 0,' ' 
"data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "3b8de3e1-75b0-47b0-bdff-c80fbb826cc4",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:14:48.746 19:55:39 blockdev_raid5f -- bdev/blockdev.sh@748 -- # jq -r .name 00:14:48.746 19:55:39 blockdev_raid5f -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:14:48.746 19:55:39 blockdev_raid5f -- bdev/blockdev.sh@751 -- # hello_world_bdev=raid5f 00:14:48.746 19:55:39 blockdev_raid5f -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:14:48.746 19:55:39 blockdev_raid5f -- bdev/blockdev.sh@753 -- # killprocess 87308 00:14:48.746 19:55:39 blockdev_raid5f -- common/autotest_common.sh@954 -- # '[' -z 87308 ']' 00:14:48.746 19:55:39 blockdev_raid5f -- common/autotest_common.sh@958 -- # kill -0 87308 00:14:48.746 19:55:39 blockdev_raid5f -- common/autotest_common.sh@959 -- # uname 00:14:48.746 19:55:39 blockdev_raid5f -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:48.746 19:55:39 blockdev_raid5f -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87308 00:14:48.746 19:55:39 blockdev_raid5f -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:48.746 19:55:39 blockdev_raid5f -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:48.746 killing process with pid 87308 00:14:48.746 19:55:39 blockdev_raid5f -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87308' 00:14:48.746 19:55:39 blockdev_raid5f -- common/autotest_common.sh@973 -- # kill 87308 00:14:48.746 19:55:39 blockdev_raid5f -- common/autotest_common.sh@978 -- # wait 87308 00:14:50.120 19:55:40 blockdev_raid5f -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:14:50.120 19:55:40 blockdev_raid5f -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:14:50.120 19:55:40 
blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:14:50.120 19:55:40 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:50.120 19:55:40 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:14:50.120 ************************************ 00:14:50.120 START TEST bdev_hello_world 00:14:50.120 ************************************ 00:14:50.120 19:55:40 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:14:50.120 [2024-11-26 19:55:41.010573] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 00:14:50.120 [2024-11-26 19:55:41.010675] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87353 ] 00:14:50.378 [2024-11-26 19:55:41.161381] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:50.378 [2024-11-26 19:55:41.257906] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:50.943 [2024-11-26 19:55:41.623209] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:14:50.943 [2024-11-26 19:55:41.623260] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev raid5f 00:14:50.943 [2024-11-26 19:55:41.623275] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:14:50.943 [2024-11-26 19:55:41.623641] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:14:50.943 [2024-11-26 19:55:41.623747] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:14:50.943 [2024-11-26 19:55:41.623761] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:14:50.943 [2024-11-26 19:55:41.623802] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev 
: Hello World! 00:14:50.943 00:14:50.943 [2024-11-26 19:55:41.623814] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:14:51.509 00:14:51.509 real 0m1.390s 00:14:51.509 user 0m1.082s 00:14:51.509 sys 0m0.191s 00:14:51.509 19:55:42 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:51.509 ************************************ 00:14:51.509 END TEST bdev_hello_world 00:14:51.509 ************************************ 00:14:51.509 19:55:42 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:14:51.509 19:55:42 blockdev_raid5f -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:14:51.509 19:55:42 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:51.509 19:55:42 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:51.509 19:55:42 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:14:51.509 ************************************ 00:14:51.509 START TEST bdev_bounds 00:14:51.509 ************************************ 00:14:51.509 19:55:42 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:14:51.509 19:55:42 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=87384 00:14:51.509 19:55:42 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:14:51.509 Process bdevio pid: 87384 00:14:51.509 19:55:42 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 87384' 00:14:51.509 19:55:42 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 87384 00:14:51.509 19:55:42 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 87384 ']' 00:14:51.509 19:55:42 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:51.509 19:55:42 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:14:51.509 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:51.509 19:55:42 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:51.509 19:55:42 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:51.509 19:55:42 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:14:51.509 19:55:42 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:14:51.767 [2024-11-26 19:55:42.458652] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 00:14:51.767 [2024-11-26 19:55:42.458787] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87384 ] 00:14:51.767 [2024-11-26 19:55:42.614065] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:52.025 [2024-11-26 19:55:42.710132] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:52.026 [2024-11-26 19:55:42.710207] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:52.026 [2024-11-26 19:55:42.710233] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:52.592 19:55:43 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:52.592 19:55:43 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:14:52.592 19:55:43 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:14:52.592 I/O targets: 00:14:52.592 raid5f: 131072 blocks of 512 bytes (64 MiB) 00:14:52.592 
00:14:52.592 00:14:52.592 CUnit - A unit testing framework for C - Version 2.1-3 00:14:52.592 http://cunit.sourceforge.net/ 00:14:52.592 00:14:52.592 00:14:52.592 Suite: bdevio tests on: raid5f 00:14:52.592 Test: blockdev write read block ...passed 00:14:52.592 Test: blockdev write zeroes read block ...passed 00:14:52.592 Test: blockdev write zeroes read no split ...passed 00:14:52.592 Test: blockdev write zeroes read split ...passed 00:14:52.850 Test: blockdev write zeroes read split partial ...passed 00:14:52.850 Test: blockdev reset ...passed 00:14:52.850 Test: blockdev write read 8 blocks ...passed 00:14:52.850 Test: blockdev write read size > 128k ...passed 00:14:52.850 Test: blockdev write read invalid size ...passed 00:14:52.850 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:52.850 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:52.850 Test: blockdev write read max offset ...passed 00:14:52.850 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:52.850 Test: blockdev writev readv 8 blocks ...passed 00:14:52.850 Test: blockdev writev readv 30 x 1block ...passed 00:14:52.850 Test: blockdev writev readv block ...passed 00:14:52.850 Test: blockdev writev readv size > 128k ...passed 00:14:52.850 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:52.850 Test: blockdev comparev and writev ...passed 00:14:52.850 Test: blockdev nvme passthru rw ...passed 00:14:52.850 Test: blockdev nvme passthru vendor specific ...passed 00:14:52.850 Test: blockdev nvme admin passthru ...passed 00:14:52.850 Test: blockdev copy ...passed 00:14:52.850 00:14:52.850 Run Summary: Type Total Ran Passed Failed Inactive 00:14:52.850 suites 1 1 n/a 0 0 00:14:52.850 tests 23 23 23 0 0 00:14:52.850 asserts 130 130 130 0 n/a 00:14:52.850 00:14:52.850 Elapsed time = 0.475 seconds 00:14:52.850 0 00:14:52.850 19:55:43 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 87384 
00:14:52.850 19:55:43 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 87384 ']' 00:14:52.850 19:55:43 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 87384 00:14:52.850 19:55:43 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:14:52.850 19:55:43 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:52.850 19:55:43 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87384 00:14:52.850 19:55:43 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:52.850 19:55:43 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:52.850 killing process with pid 87384 00:14:52.850 19:55:43 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87384' 00:14:52.850 19:55:43 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@973 -- # kill 87384 00:14:52.850 19:55:43 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@978 -- # wait 87384 00:14:53.785 19:55:44 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:14:53.785 00:14:53.785 real 0m2.021s 00:14:53.785 user 0m4.992s 00:14:53.785 sys 0m0.311s 00:14:53.785 19:55:44 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:53.785 ************************************ 00:14:53.785 END TEST bdev_bounds 00:14:53.785 19:55:44 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:14:53.785 ************************************ 00:14:53.785 19:55:44 blockdev_raid5f -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:14:53.785 19:55:44 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:14:53.786 19:55:44 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 
00:14:53.786 19:55:44 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:14:53.786 ************************************ 00:14:53.786 START TEST bdev_nbd 00:14:53.786 ************************************ 00:14:53.786 19:55:44 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:14:53.786 19:55:44 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:14:53.786 19:55:44 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:14:53.786 19:55:44 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:53.786 19:55:44 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:14:53.786 19:55:44 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('raid5f') 00:14:53.786 19:55:44 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:14:53.786 19:55:44 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=1 00:14:53.786 19:55:44 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:14:53.786 19:55:44 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:14:53.786 19:55:44 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:14:53.786 19:55:44 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=1 00:14:53.786 19:55:44 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0') 00:14:53.786 19:55:44 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:14:53.786 19:55:44 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('raid5f') 00:14:53.786 19:55:44 blockdev_raid5f.bdev_nbd -- 
bdev/blockdev.sh@314 -- # local bdev_list 00:14:53.786 19:55:44 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=87440 00:14:53.786 19:55:44 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:14:53.786 19:55:44 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 87440 /var/tmp/spdk-nbd.sock 00:14:53.786 19:55:44 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 87440 ']' 00:14:53.786 19:55:44 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:14:53.786 19:55:44 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:53.786 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:14:53.786 19:55:44 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:14:53.786 19:55:44 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:53.786 19:55:44 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:14:53.786 19:55:44 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:14:53.786 [2024-11-26 19:55:44.519700] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 
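At this point the log launches `bdev_svc` with `-r /var/tmp/spdk-nbd.sock` and calls `waitforlisten`, which blocks (up to `max_retries=100`) until something is accepting connections on that UNIX socket. As a side annotation, a hedged Python sketch of that wait-for-listener pattern — the function name is my own, not SPDK's; the real helper lives in `common/autotest_common.sh`:

```python
import socket
import time

def wait_for_unix_listener(path: str, timeout: float = 30.0,
                           interval: float = 0.2) -> bool:
    """Poll until a process accepts connections on a UNIX-domain socket,
    mirroring the waitforlisten retry loop seen in the trace above."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        # A fresh socket per attempt: connect() either succeeds (listener
        # is up) or raises OSError (socket file missing / nobody listening).
        with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
            try:
                s.connect(path)
                return True
            except OSError:
                time.sleep(interval)
    return False
```

The same pattern explains the duplicated "Waiting for process to start up and listen on UNIX domain socket ..." lines in the log: the message is echoed once per wait, both here and for the earlier `/var/tmp/spdk.sock` target.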
00:14:53.786 [2024-11-26 19:55:44.519800] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:53.786 [2024-11-26 19:55:44.669910] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:54.044 [2024-11-26 19:55:44.768722] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:54.635 19:55:45 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:54.635 19:55:45 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:14:54.635 19:55:45 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock raid5f 00:14:54.635 19:55:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:54.635 19:55:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('raid5f') 00:14:54.635 19:55:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:14:54.635 19:55:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock raid5f 00:14:54.635 19:55:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:54.635 19:55:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('raid5f') 00:14:54.635 19:55:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:14:54.635 19:55:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:14:54.635 19:55:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:14:54.635 19:55:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:14:54.635 19:55:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:14:54.635 19:55:45 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f 00:14:54.892 19:55:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:14:54.892 19:55:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:14:54.893 19:55:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:14:54.893 19:55:45 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:54.893 19:55:45 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:14:54.893 19:55:45 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:54.893 19:55:45 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:54.893 19:55:45 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:54.893 19:55:45 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:14:54.893 19:55:45 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:54.893 19:55:45 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:54.893 19:55:45 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:54.893 1+0 records in 00:14:54.893 1+0 records out 00:14:54.893 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000204212 s, 20.1 MB/s 00:14:54.893 19:55:45 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:54.893 19:55:45 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:14:54.893 19:55:45 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:54.893 19:55:45 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 
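The `waitfornbd` trace above retries `grep -q -w nbd0 /proc/partitions` up to 20 times before the `dd if=/dev/nbd0 ... bs=4096 count=1 iflag=direct` read-back check runs. As an annotation, a Python sketch of the polling half — function and parameter names are my own, and the word-boundary match reproduces grep's `-w` so that `nbd0` is not confused with `nbd10`:

```python
import re
import time

def wait_for_partition(name: str, partitions_path: str = "/proc/partitions",
                       retries: int = 20, interval: float = 0.1) -> bool:
    """Succeed once `name` appears as a whole word in the partitions table,
    like waitfornbd's grep -q -w retry loop in the trace above."""
    pattern = re.compile(r"\b%s\b" % re.escape(name))
    for _ in range(retries):
        try:
            with open(partitions_path) as f:
                if pattern.search(f.read()):
                    return True
        except FileNotFoundError:
            pass  # device table not there yet; keep retrying
        time.sleep(interval)
    return False
```

Only after this poll succeeds does the script trust the NBD device enough to `dd` a 4 KiB block through it and `stat` the result, which is the `1+0 records in / 1+0 records out` pair visible in the log.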
00:14:54.893 19:55:45 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:14:54.893 19:55:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:14:54.893 19:55:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:14:54.893 19:55:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:14:54.893 19:55:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:14:54.893 { 00:14:54.893 "nbd_device": "/dev/nbd0", 00:14:54.893 "bdev_name": "raid5f" 00:14:54.893 } 00:14:54.893 ]' 00:14:54.893 19:55:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:14:54.893 19:55:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:14:54.893 { 00:14:54.893 "nbd_device": "/dev/nbd0", 00:14:54.893 "bdev_name": "raid5f" 00:14:54.893 } 00:14:54.893 ]' 00:14:54.893 19:55:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:14:55.151 19:55:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:14:55.151 19:55:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:55.151 19:55:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:55.151 19:55:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:55.151 19:55:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:14:55.151 19:55:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:55.151 19:55:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:14:55.151 19:55:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd0 00:14:55.151 19:55:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:55.151 19:55:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:55.151 19:55:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:55.151 19:55:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:55.151 19:55:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:55.151 19:55:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:55.151 19:55:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:55.151 19:55:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:14:55.151 19:55:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:55.151 19:55:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:14:55.408 19:55:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:14:55.408 19:55:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:14:55.408 19:55:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:14:55.408 19:55:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:14:55.408 19:55:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:55.408 19:55:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:14:55.408 19:55:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:14:55.408 19:55:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:14:55.408 19:55:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:14:55.408 19:55:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:14:55.408 19:55:46 blockdev_raid5f.bdev_nbd 
-- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:14:55.408 19:55:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:14:55.408 19:55:46 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:14:55.408 19:55:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:55.408 19:55:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('raid5f') 00:14:55.408 19:55:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:14:55.408 19:55:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:14:55.408 19:55:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:14:55.408 19:55:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:14:55.409 19:55:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:55.409 19:55:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('raid5f') 00:14:55.409 19:55:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:55.409 19:55:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:55.409 19:55:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:55.409 19:55:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:14:55.409 19:55:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:55.409 19:55:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:55.409 19:55:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f /dev/nbd0 00:14:55.666 /dev/nbd0 00:14:55.666 19:55:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:55.666 19:55:46 
blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:55.666 19:55:46 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:55.666 19:55:46 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:14:55.666 19:55:46 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:55.666 19:55:46 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:55.666 19:55:46 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:55.666 19:55:46 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:14:55.666 19:55:46 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:55.666 19:55:46 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:55.666 19:55:46 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:55.666 1+0 records in 00:14:55.666 1+0 records out 00:14:55.666 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000284512 s, 14.4 MB/s 00:14:55.666 19:55:46 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:55.666 19:55:46 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:14:55.666 19:55:46 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:55.666 19:55:46 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:55.666 19:55:46 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:14:55.666 19:55:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:55.666 19:55:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:55.666 19:55:46 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:14:55.666 19:55:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:55.666 19:55:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:14:55.924 19:55:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:14:55.924 { 00:14:55.924 "nbd_device": "/dev/nbd0", 00:14:55.924 "bdev_name": "raid5f" 00:14:55.924 } 00:14:55.924 ]' 00:14:55.924 19:55:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:14:55.924 19:55:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:14:55.924 { 00:14:55.924 "nbd_device": "/dev/nbd0", 00:14:55.924 "bdev_name": "raid5f" 00:14:55.924 } 00:14:55.924 ]' 00:14:55.924 19:55:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:14:55.924 19:55:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:55.924 19:55:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:14:55.924 19:55:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=1 00:14:55.924 19:55:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 1 00:14:55.924 19:55:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=1 00:14:55.924 19:55:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:14:55.924 19:55:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:14:55.924 19:55:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:14:55.924 19:55:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:14:55.924 19:55:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:14:55.924 19:55:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:14:55.924 19:55:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:14:55.924 19:55:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:14:55.924 256+0 records in 00:14:55.924 256+0 records out 00:14:55.924 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00727423 s, 144 MB/s 00:14:55.924 19:55:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:55.924 19:55:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:14:55.924 256+0 records in 00:14:55.924 256+0 records out 00:14:55.924 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0223102 s, 47.0 MB/s 00:14:55.924 19:55:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:14:55.924 19:55:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:14:55.924 19:55:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:14:55.924 19:55:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:14:55.924 19:55:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:14:55.924 19:55:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:14:55.924 19:55:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:14:55.924 19:55:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:55.924 19:55:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:14:55.924 19:55:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:14:55.924 19:55:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:14:55.924 19:55:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:55.924 19:55:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:55.924 19:55:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:55.924 19:55:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:14:55.924 19:55:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:55.924 19:55:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:14:56.183 19:55:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:56.183 19:55:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:56.183 19:55:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:56.183 19:55:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:56.183 19:55:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:56.183 19:55:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:56.183 19:55:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:56.183 19:55:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:56.183 19:55:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:14:56.183 19:55:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:56.183 19:55:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 
00:14:56.441 19:55:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:14:56.441 19:55:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:14:56.441 19:55:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:14:56.441 19:55:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:14:56.441 19:55:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:14:56.441 19:55:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:56.441 19:55:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:14:56.441 19:55:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:14:56.441 19:55:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:14:56.441 19:55:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:14:56.441 19:55:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:14:56.441 19:55:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:14:56.441 19:55:47 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:14:56.441 19:55:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:56.441 19:55:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:14:56.441 19:55:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:14:56.699 malloc_lvol_verify 00:14:56.699 19:55:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:14:56.956 94904884-dedf-477e-b68e-85cbdf0f91d7 00:14:56.956 19:55:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@136 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:14:57.214 bc2d9a45-11af-4ff2-941c-62bc511d8d9e 00:14:57.214 19:55:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:14:57.214 /dev/nbd0 00:14:57.214 19:55:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:14:57.214 19:55:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:14:57.214 19:55:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:14:57.214 19:55:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:14:57.214 19:55:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:14:57.214 mke2fs 1.47.0 (5-Feb-2023) 00:14:57.215 Discarding device blocks: 0/4096 done 00:14:57.215 Creating filesystem with 4096 1k blocks and 1024 inodes 00:14:57.215 00:14:57.215 Allocating group tables: 0/1 done 00:14:57.215 Writing inode tables: 0/1 done 00:14:57.215 Creating journal (1024 blocks): done 00:14:57.215 Writing superblocks and filesystem accounting information: 0/1 done 00:14:57.215 00:14:57.215 19:55:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:14:57.215 19:55:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:57.215 19:55:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:57.215 19:55:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:57.215 19:55:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:14:57.215 19:55:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:57.215 19:55:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:14:57.473 19:55:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:57.473 19:55:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:57.473 19:55:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:57.473 19:55:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:57.473 19:55:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:57.473 19:55:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:57.473 19:55:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:57.473 19:55:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:57.473 19:55:48 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 87440 00:14:57.473 19:55:48 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 87440 ']' 00:14:57.473 19:55:48 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 87440 00:14:57.473 19:55:48 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:14:57.473 19:55:48 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:57.473 19:55:48 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87440 00:14:57.473 19:55:48 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:57.473 19:55:48 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:57.473 killing process with pid 87440 00:14:57.473 19:55:48 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87440' 00:14:57.473 19:55:48 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@973 -- # kill 87440 00:14:57.473 19:55:48 blockdev_raid5f.bdev_nbd -- 
common/autotest_common.sh@978 -- # wait 87440 00:14:58.410 19:55:49 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:14:58.410 00:14:58.410 real 0m4.694s 00:14:58.410 user 0m6.790s 00:14:58.410 sys 0m0.982s 00:14:58.410 19:55:49 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:58.410 19:55:49 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:14:58.410 ************************************ 00:14:58.410 END TEST bdev_nbd 00:14:58.410 ************************************ 00:14:58.410 19:55:49 blockdev_raid5f -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:14:58.410 19:55:49 blockdev_raid5f -- bdev/blockdev.sh@763 -- # '[' raid5f = nvme ']' 00:14:58.410 19:55:49 blockdev_raid5f -- bdev/blockdev.sh@763 -- # '[' raid5f = gpt ']' 00:14:58.410 19:55:49 blockdev_raid5f -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:14:58.410 19:55:49 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:58.410 19:55:49 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:58.410 19:55:49 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:14:58.410 ************************************ 00:14:58.410 START TEST bdev_fio 00:14:58.410 ************************************ 00:14:58.411 19:55:49 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 00:14:58.411 19:55:49 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:14:58.411 19:55:49 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:14:58.411 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:14:58.411 19:55:49 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:14:58.411 19:55:49 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:14:58.411 19:55:49 
blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:14:58.411 19:55:49 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:14:58.411 19:55:49 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:14:58.411 19:55:49 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:14:58.411 19:55:49 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 00:14:58.411 19:55:49 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:14:58.411 19:55:49 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:14:58.411 19:55:49 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:14:58.411 19:55:49 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:14:58.411 19:55:49 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 00:14:58.411 19:55:49 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:14:58.411 19:55:49 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:14:58.411 19:55:49 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:14:58.411 19:55:49 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:14:58.411 19:55:49 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:14:58.411 19:55:49 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:14:58.411 19:55:49 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:14:58.411 19:55:49 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* 
]] 00:14:58.411 19:55:49 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1329 -- # echo serialize_overlap=1 00:14:58.411 19:55:49 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:14:58.411 19:55:49 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_raid5f]' 00:14:58.411 19:55:49 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=raid5f 00:14:58.411 19:55:49 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:14:58.411 19:55:49 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:14:58.411 19:55:49 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:14:58.411 19:55:49 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:58.411 19:55:49 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:14:58.411 ************************************ 00:14:58.411 START TEST bdev_fio_rw_verify 00:14:58.411 ************************************ 00:14:58.411 19:55:49 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:14:58.411 19:55:49 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:14:58.411 19:55:49 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:58.411 19:55:49 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:58.411 19:55:49 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:58.411 19:55:49 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:58.411 19:55:49 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 00:14:58.411 19:55:49 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 00:14:58.411 19:55:49 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:58.411 19:55:49 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:58.411 19:55:49 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 00:14:58.411 19:55:49 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:58.411 19:55:49 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:58.411 19:55:49 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:58.412 19:55:49 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- 
common/autotest_common.sh@1351 -- # break 00:14:58.412 19:55:49 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:58.412 19:55:49 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:14:58.669 job_raid5f: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:58.669 fio-3.35 00:14:58.669 Starting 1 thread 00:15:10.865 00:15:10.865 job_raid5f: (groupid=0, jobs=1): err= 0: pid=87631: Tue Nov 26 19:56:00 2024 00:15:10.865 read: IOPS=12.7k, BW=49.8MiB/s (52.2MB/s)(498MiB/10001msec) 00:15:10.865 slat (usec): min=17, max=421, avg=18.95, stdev= 2.45 00:15:10.865 clat (usec): min=9, max=309, avg=128.26, stdev=45.91 00:15:10.865 lat (usec): min=27, max=683, avg=147.20, stdev=46.42 00:15:10.865 clat percentiles (usec): 00:15:10.865 | 50.000th=[ 133], 99.000th=[ 245], 99.900th=[ 255], 99.990th=[ 269], 00:15:10.865 | 99.999th=[ 310] 00:15:10.865 write: IOPS=13.4k, BW=52.3MiB/s (54.8MB/s)(517MiB/9881msec); 0 zone resets 00:15:10.865 slat (usec): min=7, max=219, avg=15.82, stdev= 2.51 00:15:10.865 clat (usec): min=53, max=831, avg=285.74, stdev=41.47 00:15:10.865 lat (usec): min=68, max=1010, avg=301.56, stdev=42.46 00:15:10.865 clat percentiles (usec): 00:15:10.865 | 50.000th=[ 289], 99.000th=[ 408], 99.900th=[ 424], 99.990th=[ 510], 00:15:10.865 | 99.999th=[ 766] 00:15:10.865 bw ( KiB/s): min=42104, max=56864, per=98.56%, avg=52762.26, stdev=4096.44, samples=19 00:15:10.865 iops : min=10526, max=14216, avg=13190.53, stdev=1024.22, samples=19 00:15:10.865 lat (usec) : 10=0.01%, 20=0.01%, 50=0.01%, 
100=16.97%, 250=42.75% 00:15:10.865 lat (usec) : 500=40.27%, 750=0.01%, 1000=0.01% 00:15:10.865 cpu : usr=99.15%, sys=0.25%, ctx=29, majf=0, minf=10417 00:15:10.865 IO depths : 1=7.6%, 2=19.9%, 4=55.1%, 8=17.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:10.865 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:10.865 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:10.865 issued rwts: total=127499,132237,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:10.865 latency : target=0, window=0, percentile=100.00%, depth=8 00:15:10.865 00:15:10.865 Run status group 0 (all jobs): 00:15:10.865 READ: bw=49.8MiB/s (52.2MB/s), 49.8MiB/s-49.8MiB/s (52.2MB/s-52.2MB/s), io=498MiB (522MB), run=10001-10001msec 00:15:10.865 WRITE: bw=52.3MiB/s (54.8MB/s), 52.3MiB/s-52.3MiB/s (54.8MB/s-54.8MB/s), io=517MiB (542MB), run=9881-9881msec 00:15:10.865 ----------------------------------------------------- 00:15:10.865 Suppressions used: 00:15:10.865 count bytes template 00:15:10.865 1 7 /usr/src/fio/parse.c 00:15:10.865 774 74304 /usr/src/fio/iolog.c 00:15:10.865 1 8 libtcmalloc_minimal.so 00:15:10.865 1 904 libcrypto.so 00:15:10.865 ----------------------------------------------------- 00:15:10.865 00:15:10.865 00:15:10.865 real 0m12.090s 00:15:10.865 user 0m12.612s 00:15:10.865 sys 0m0.615s 00:15:10.865 19:56:01 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:10.865 19:56:01 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:15:10.865 ************************************ 00:15:10.865 END TEST bdev_fio_rw_verify 00:15:10.865 ************************************ 00:15:10.865 19:56:01 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:15:10.865 19:56:01 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:10.865 19:56:01 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@353 -- # 
fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:15:10.865 19:56:01 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:10.865 19:56:01 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim 00:15:10.865 19:56:01 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type= 00:15:10.865 19:56:01 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:15:10.865 19:56:01 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:15:10.865 19:56:01 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:15:10.865 19:56:01 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:15:10.865 19:56:01 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:15:10.865 19:56:01 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:10.865 19:56:01 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:15:10.865 19:56:01 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:15:10.865 19:56:01 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:15:10.865 19:56:01 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:15:10.865 19:56:01 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:15:10.865 19:56:01 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "e578e15f-e871-4d88-9ec5-b2f1e45cec8d"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "e578e15f-e871-4d88-9ec5-b2f1e45cec8d",' ' 
"assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "e578e15f-e871-4d88-9ec5-b2f1e45cec8d",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "6927480e-a5de-43d5-8afb-ebfd89d50cc4",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "9b31e989-9cfd-4d8b-ab45-ee3885ec3106",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "3b8de3e1-75b0-47b0-bdff-c80fbb826cc4",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:15:10.866 19:56:01 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:15:10.866 19:56:01 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:10.866 19:56:01 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:15:10.866 /home/vagrant/spdk_repo/spdk 00:15:10.866 19:56:01 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:15:10.866 19:56:01 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 00:15:10.866 00:15:10.866 real 
0m12.258s 00:15:10.866 user 0m12.682s 00:15:10.866 sys 0m0.695s 00:15:10.866 19:56:01 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:10.866 ************************************ 00:15:10.866 END TEST bdev_fio 00:15:10.866 19:56:01 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:15:10.866 ************************************ 00:15:10.866 19:56:01 blockdev_raid5f -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:15:10.866 19:56:01 blockdev_raid5f -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:15:10.866 19:56:01 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:15:10.866 19:56:01 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:10.866 19:56:01 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:15:10.866 ************************************ 00:15:10.866 START TEST bdev_verify 00:15:10.866 ************************************ 00:15:10.866 19:56:01 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:15:10.866 [2024-11-26 19:56:01.548628] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 
00:15:10.866 [2024-11-26 19:56:01.548752] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87794 ] 00:15:10.866 [2024-11-26 19:56:01.709666] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:11.123 [2024-11-26 19:56:01.822265] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:11.123 [2024-11-26 19:56:01.822386] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:11.381 Running I/O for 5 seconds... 00:15:13.323 13330.00 IOPS, 52.07 MiB/s [2024-11-26T19:56:05.632Z] 16883.50 IOPS, 65.95 MiB/s [2024-11-26T19:56:06.566Z] 18794.33 IOPS, 73.42 MiB/s [2024-11-26T19:56:07.500Z] 19967.75 IOPS, 78.00 MiB/s [2024-11-26T19:56:07.500Z] 20678.20 IOPS, 80.77 MiB/s 00:15:16.563 Latency(us) 00:15:16.563 [2024-11-26T19:56:07.500Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:16.563 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:16.563 Verification LBA range: start 0x0 length 0x2000 00:15:16.563 raid5f : 5.01 10379.94 40.55 0.00 0.00 18364.39 189.83 22786.36 00:15:16.563 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:16.563 Verification LBA range: start 0x2000 length 0x2000 00:15:16.563 raid5f : 5.01 10304.44 40.25 0.00 0.00 18681.58 159.90 22988.01 00:15:16.563 [2024-11-26T19:56:07.500Z] =================================================================================================================== 00:15:16.563 [2024-11-26T19:56:07.500Z] Total : 20684.38 80.80 0.00 0.00 18522.47 159.90 22988.01 00:15:17.128 00:15:17.128 real 0m6.511s 00:15:17.128 user 0m12.107s 00:15:17.128 sys 0m0.242s 00:15:17.128 19:56:07 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:17.128 19:56:07 
blockdev_raid5f.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:15:17.128 ************************************ 00:15:17.128 END TEST bdev_verify 00:15:17.128 ************************************ 00:15:17.128 19:56:08 blockdev_raid5f -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:15:17.128 19:56:08 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:15:17.128 19:56:08 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:17.128 19:56:08 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:15:17.128 ************************************ 00:15:17.128 START TEST bdev_verify_big_io 00:15:17.128 ************************************ 00:15:17.128 19:56:08 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:15:17.385 [2024-11-26 19:56:08.103366] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 00:15:17.385 [2024-11-26 19:56:08.103492] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87887 ] 00:15:17.385 [2024-11-26 19:56:08.263372] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:17.643 [2024-11-26 19:56:08.377287] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:17.643 [2024-11-26 19:56:08.377410] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:17.900 Running I/O for 5 seconds... 
00:15:20.258 568.00 IOPS, 35.50 MiB/s [2024-11-26T19:56:12.125Z] 824.50 IOPS, 51.53 MiB/s [2024-11-26T19:56:13.057Z] 973.00 IOPS, 60.81 MiB/s [2024-11-26T19:56:13.989Z] 1046.25 IOPS, 65.39 MiB/s [2024-11-26T19:56:13.989Z] 1091.40 IOPS, 68.21 MiB/s 00:15:23.052 Latency(us) 00:15:23.052 [2024-11-26T19:56:13.989Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:23.052 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:23.052 Verification LBA range: start 0x0 length 0x200 00:15:23.052 raid5f : 5.15 529.59 33.10 0.00 0.00 5857339.63 124.46 358129.03 00:15:23.052 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:23.052 Verification LBA range: start 0x200 length 0x200 00:15:23.052 raid5f : 5.15 554.20 34.64 0.00 0.00 5619398.58 118.15 358129.03 00:15:23.052 [2024-11-26T19:56:13.989Z] =================================================================================================================== 00:15:23.052 [2024-11-26T19:56:13.989Z] Total : 1083.79 67.74 0.00 0.00 5735663.78 118.15 358129.03 00:15:23.986 00:15:23.986 real 0m6.688s 00:15:23.986 user 0m12.491s 00:15:23.986 sys 0m0.209s 00:15:23.986 19:56:14 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:23.986 19:56:14 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:15:23.986 ************************************ 00:15:23.986 END TEST bdev_verify_big_io 00:15:23.986 ************************************ 00:15:23.986 19:56:14 blockdev_raid5f -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:23.986 19:56:14 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:15:23.986 19:56:14 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:23.986 19:56:14 
blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:15:23.986 ************************************ 00:15:23.986 START TEST bdev_write_zeroes 00:15:23.986 ************************************ 00:15:23.986 19:56:14 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:23.986 [2024-11-26 19:56:14.821976] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 00:15:23.986 [2024-11-26 19:56:14.822076] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87974 ] 00:15:24.243 [2024-11-26 19:56:14.967609] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:24.244 [2024-11-26 19:56:15.059054] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:24.501 Running I/O for 1 seconds... 
00:15:25.872 29799.00 IOPS, 116.40 MiB/s 00:15:25.872 Latency(us) 00:15:25.872 [2024-11-26T19:56:16.809Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:25.872 Job: raid5f (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:25.872 raid5f : 1.01 29763.01 116.26 0.00 0.00 4287.51 1247.70 5999.06 00:15:25.872 [2024-11-26T19:56:16.809Z] =================================================================================================================== 00:15:25.872 [2024-11-26T19:56:16.809Z] Total : 29763.01 116.26 0.00 0.00 4287.51 1247.70 5999.06 00:15:26.438 00:15:26.438 real 0m2.407s 00:15:26.438 user 0m2.078s 00:15:26.438 sys 0m0.207s 00:15:26.438 19:56:17 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:26.438 19:56:17 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:15:26.438 ************************************ 00:15:26.438 END TEST bdev_write_zeroes 00:15:26.438 ************************************ 00:15:26.438 19:56:17 blockdev_raid5f -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:26.438 19:56:17 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:15:26.438 19:56:17 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:26.438 19:56:17 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:15:26.438 ************************************ 00:15:26.438 START TEST bdev_json_nonenclosed 00:15:26.438 ************************************ 00:15:26.438 19:56:17 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:26.438 [2024-11-26 
19:56:17.268908] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 00:15:26.438 [2024-11-26 19:56:17.269002] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88022 ] 00:15:26.697 [2024-11-26 19:56:17.419665] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:26.697 [2024-11-26 19:56:17.512907] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:26.697 [2024-11-26 19:56:17.512993] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:15:26.697 [2024-11-26 19:56:17.513013] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:15:26.697 [2024-11-26 19:56:17.513021] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:15:26.955 00:15:26.955 real 0m0.442s 00:15:26.955 user 0m0.247s 00:15:26.955 sys 0m0.092s 00:15:26.955 19:56:17 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:26.955 19:56:17 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:15:26.955 ************************************ 00:15:26.955 END TEST bdev_json_nonenclosed 00:15:26.955 ************************************ 00:15:26.955 19:56:17 blockdev_raid5f -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:26.955 19:56:17 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:15:26.955 19:56:17 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:26.955 19:56:17 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:15:26.955 
************************************ 00:15:26.955 START TEST bdev_json_nonarray 00:15:26.955 ************************************ 00:15:26.955 19:56:17 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:26.955 [2024-11-26 19:56:17.766242] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 00:15:26.955 [2024-11-26 19:56:17.766755] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88042 ] 00:15:27.213 [2024-11-26 19:56:17.926496] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:27.213 [2024-11-26 19:56:18.040913] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:27.213 [2024-11-26 19:56:18.041014] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:15:27.213 [2024-11-26 19:56:18.041033] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:15:27.213 [2024-11-26 19:56:18.041048] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:15:27.472 00:15:27.472 real 0m0.544s 00:15:27.472 user 0m0.340s 00:15:27.472 sys 0m0.099s 00:15:27.472 19:56:18 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:27.472 19:56:18 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:15:27.472 ************************************ 00:15:27.472 END TEST bdev_json_nonarray 00:15:27.472 ************************************ 00:15:27.472 19:56:18 blockdev_raid5f -- bdev/blockdev.sh@786 -- # [[ raid5f == bdev ]] 00:15:27.472 19:56:18 blockdev_raid5f -- bdev/blockdev.sh@793 -- # [[ raid5f == gpt ]] 00:15:27.472 19:56:18 blockdev_raid5f -- bdev/blockdev.sh@797 -- # [[ raid5f == crypto_sw ]] 00:15:27.472 19:56:18 blockdev_raid5f -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:15:27.472 19:56:18 blockdev_raid5f -- bdev/blockdev.sh@810 -- # cleanup 00:15:27.472 19:56:18 blockdev_raid5f -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:15:27.472 19:56:18 blockdev_raid5f -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:15:27.473 19:56:18 blockdev_raid5f -- bdev/blockdev.sh@26 -- # [[ raid5f == rbd ]] 00:15:27.473 19:56:18 blockdev_raid5f -- bdev/blockdev.sh@30 -- # [[ raid5f == daos ]] 00:15:27.473 19:56:18 blockdev_raid5f -- bdev/blockdev.sh@34 -- # [[ raid5f = \g\p\t ]] 00:15:27.473 19:56:18 blockdev_raid5f -- bdev/blockdev.sh@40 -- # [[ raid5f == xnvme ]] 00:15:27.473 00:15:27.473 real 0m39.989s 00:15:27.473 user 0m55.465s 00:15:27.473 sys 0m3.778s 00:15:27.473 19:56:18 blockdev_raid5f -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:27.473 19:56:18 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:15:27.473 
************************************ 00:15:27.473 END TEST blockdev_raid5f 00:15:27.473 ************************************ 00:15:27.473 19:56:18 -- spdk/autotest.sh@194 -- # uname -s 00:15:27.473 19:56:18 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:15:27.473 19:56:18 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:15:27.473 19:56:18 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:15:27.473 19:56:18 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:15:27.473 19:56:18 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:15:27.473 19:56:18 -- spdk/autotest.sh@260 -- # timing_exit lib 00:15:27.473 19:56:18 -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:27.473 19:56:18 -- common/autotest_common.sh@10 -- # set +x 00:15:27.473 19:56:18 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:15:27.473 19:56:18 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:15:27.473 19:56:18 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']' 00:15:27.473 19:56:18 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:15:27.473 19:56:18 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:15:27.473 19:56:18 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:15:27.473 19:56:18 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:15:27.473 19:56:18 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:15:27.473 19:56:18 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:15:27.473 19:56:18 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:15:27.473 19:56:18 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:15:27.473 19:56:18 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:15:27.473 19:56:18 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:15:27.473 19:56:18 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:15:27.473 19:56:18 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:15:27.473 19:56:18 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:15:27.473 19:56:18 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:15:27.473 19:56:18 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:15:27.473 19:56:18 -- spdk/autotest.sh@385 -- # trap - SIGINT 
SIGTERM EXIT 00:15:27.473 19:56:18 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:15:27.473 19:56:18 -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:27.473 19:56:18 -- common/autotest_common.sh@10 -- # set +x 00:15:27.473 19:56:18 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:15:27.473 19:56:18 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:15:27.473 19:56:18 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:15:27.473 19:56:18 -- common/autotest_common.sh@10 -- # set +x 00:15:28.847 INFO: APP EXITING 00:15:28.847 INFO: killing all VMs 00:15:28.847 INFO: killing vhost app 00:15:28.847 INFO: EXIT DONE 00:15:28.847 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:28.847 Waiting for block devices as requested 00:15:28.847 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:15:29.105 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:15:29.670 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:29.671 Cleaning 00:15:29.671 Removing: /var/run/dpdk/spdk0/config 00:15:29.671 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:15:29.671 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:15:29.671 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:15:29.671 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:15:29.671 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:15:29.671 Removing: /var/run/dpdk/spdk0/hugepage_info 00:15:29.671 Removing: /dev/shm/spdk_tgt_trace.pid56143 00:15:29.671 Removing: /var/run/dpdk/spdk0 00:15:29.671 Removing: /var/run/dpdk/spdk_pid55941 00:15:29.671 Removing: /var/run/dpdk/spdk_pid56143 00:15:29.671 Removing: /var/run/dpdk/spdk_pid56361 00:15:29.671 Removing: /var/run/dpdk/spdk_pid56454 00:15:29.671 Removing: /var/run/dpdk/spdk_pid56499 00:15:29.671 Removing: /var/run/dpdk/spdk_pid56622 00:15:29.671 Removing: /var/run/dpdk/spdk_pid56634 
00:15:29.671 Removing: /var/run/dpdk/spdk_pid56828 00:15:29.671 Removing: /var/run/dpdk/spdk_pid56925 00:15:29.671 Removing: /var/run/dpdk/spdk_pid57019 00:15:29.671 Removing: /var/run/dpdk/spdk_pid57132 00:15:29.671 Removing: /var/run/dpdk/spdk_pid57225 00:15:29.671 Removing: /var/run/dpdk/spdk_pid57264 00:15:29.671 Removing: /var/run/dpdk/spdk_pid57301 00:15:29.671 Removing: /var/run/dpdk/spdk_pid57374 00:15:29.671 Removing: /var/run/dpdk/spdk_pid57461 00:15:29.671 Removing: /var/run/dpdk/spdk_pid57897 00:15:29.671 Removing: /var/run/dpdk/spdk_pid57961 00:15:29.671 Removing: /var/run/dpdk/spdk_pid58024 00:15:29.671 Removing: /var/run/dpdk/spdk_pid58040 00:15:29.671 Removing: /var/run/dpdk/spdk_pid58131 00:15:29.671 Removing: /var/run/dpdk/spdk_pid58147 00:15:29.671 Removing: /var/run/dpdk/spdk_pid58249 00:15:29.671 Removing: /var/run/dpdk/spdk_pid58265 00:15:29.671 Removing: /var/run/dpdk/spdk_pid58318 00:15:29.671 Removing: /var/run/dpdk/spdk_pid58336 00:15:29.671 Removing: /var/run/dpdk/spdk_pid58389 00:15:29.671 Removing: /var/run/dpdk/spdk_pid58407 00:15:29.671 Removing: /var/run/dpdk/spdk_pid58567 00:15:29.671 Removing: /var/run/dpdk/spdk_pid58604 00:15:29.671 Removing: /var/run/dpdk/spdk_pid58687 00:15:29.671 Removing: /var/run/dpdk/spdk_pid59931 00:15:29.671 Removing: /var/run/dpdk/spdk_pid60137 00:15:29.671 Removing: /var/run/dpdk/spdk_pid60267 00:15:29.671 Removing: /var/run/dpdk/spdk_pid60879 00:15:29.671 Removing: /var/run/dpdk/spdk_pid61074 00:15:29.671 Removing: /var/run/dpdk/spdk_pid61209 00:15:29.671 Removing: /var/run/dpdk/spdk_pid61819 00:15:29.671 Removing: /var/run/dpdk/spdk_pid62129 00:15:29.671 Removing: /var/run/dpdk/spdk_pid62269 00:15:29.671 Removing: /var/run/dpdk/spdk_pid63588 00:15:29.671 Removing: /var/run/dpdk/spdk_pid63830 00:15:29.671 Removing: /var/run/dpdk/spdk_pid63959 00:15:29.671 Removing: /var/run/dpdk/spdk_pid65289 00:15:29.671 Removing: /var/run/dpdk/spdk_pid65531 00:15:29.671 Removing: /var/run/dpdk/spdk_pid65664 
00:15:29.671 Removing: /var/run/dpdk/spdk_pid66984 00:15:29.671 Removing: /var/run/dpdk/spdk_pid67408 00:15:29.671 Removing: /var/run/dpdk/spdk_pid67542 00:15:29.671 Removing: /var/run/dpdk/spdk_pid68950 00:15:29.671 Removing: /var/run/dpdk/spdk_pid69198 00:15:29.671 Removing: /var/run/dpdk/spdk_pid69333 00:15:29.671 Removing: /var/run/dpdk/spdk_pid70751 00:15:29.671 Removing: /var/run/dpdk/spdk_pid71000 00:15:29.671 Removing: /var/run/dpdk/spdk_pid71134 00:15:29.671 Removing: /var/run/dpdk/spdk_pid72541 00:15:29.671 Removing: /var/run/dpdk/spdk_pid73002 00:15:29.671 Removing: /var/run/dpdk/spdk_pid73142 00:15:29.671 Removing: /var/run/dpdk/spdk_pid73269 00:15:29.671 Removing: /var/run/dpdk/spdk_pid73675 00:15:29.671 Removing: /var/run/dpdk/spdk_pid74376 00:15:29.671 Removing: /var/run/dpdk/spdk_pid74754 00:15:29.671 Removing: /var/run/dpdk/spdk_pid75415 00:15:29.671 Removing: /var/run/dpdk/spdk_pid75846 00:15:29.671 Removing: /var/run/dpdk/spdk_pid76579 00:15:29.671 Removing: /var/run/dpdk/spdk_pid76966 00:15:29.671 Removing: /var/run/dpdk/spdk_pid78836 00:15:29.671 Removing: /var/run/dpdk/spdk_pid79259 00:15:29.671 Removing: /var/run/dpdk/spdk_pid79678 00:15:29.671 Removing: /var/run/dpdk/spdk_pid81668 00:15:29.671 Removing: /var/run/dpdk/spdk_pid82126 00:15:29.671 Removing: /var/run/dpdk/spdk_pid82631 00:15:29.671 Removing: /var/run/dpdk/spdk_pid83669 00:15:29.671 Removing: /var/run/dpdk/spdk_pid83975 00:15:29.671 Removing: /var/run/dpdk/spdk_pid84874 00:15:29.671 Removing: /var/run/dpdk/spdk_pid85181 00:15:29.671 Removing: /var/run/dpdk/spdk_pid86079 00:15:29.929 Removing: /var/run/dpdk/spdk_pid86391 00:15:29.929 Removing: /var/run/dpdk/spdk_pid87045 00:15:29.929 Removing: /var/run/dpdk/spdk_pid87308 00:15:29.929 Removing: /var/run/dpdk/spdk_pid87353 00:15:29.929 Removing: /var/run/dpdk/spdk_pid87384 00:15:29.929 Removing: /var/run/dpdk/spdk_pid87621 00:15:29.929 Removing: /var/run/dpdk/spdk_pid87794 00:15:29.929 Removing: /var/run/dpdk/spdk_pid87887 
00:15:29.929 Removing: /var/run/dpdk/spdk_pid87974 00:15:29.929 Removing: /var/run/dpdk/spdk_pid88022 00:15:29.929 Removing: /var/run/dpdk/spdk_pid88042 00:15:29.929 Clean 00:15:29.929 19:56:20 -- common/autotest_common.sh@1453 -- # return 0 00:15:29.929 19:56:20 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:15:29.929 19:56:20 -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:29.929 19:56:20 -- common/autotest_common.sh@10 -- # set +x 00:15:29.929 19:56:20 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:15:29.929 19:56:20 -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:29.929 19:56:20 -- common/autotest_common.sh@10 -- # set +x 00:15:29.929 19:56:20 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:15:29.929 19:56:20 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:15:29.929 19:56:20 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:15:29.929 19:56:20 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:15:29.929 19:56:20 -- spdk/autotest.sh@398 -- # hostname 00:15:29.929 19:56:20 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:15:30.187 geninfo: WARNING: invalid characters removed from testname! 
00:15:52.107 19:56:42 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:15:54.637 19:56:45 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:15:57.263 19:56:47 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:15:58.635 19:56:49 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:16:00.529 19:56:51 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o 
/home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:16:02.424 19:56:52 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:16:04.317 19:56:54 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:16:04.317 19:56:54 -- spdk/autorun.sh@1 -- $ timing_finish 00:16:04.317 19:56:54 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:16:04.317 19:56:54 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:16:04.317 19:56:54 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:16:04.317 19:56:54 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:16:04.317 + [[ -n 4985 ]] 00:16:04.317 + sudo kill 4985 00:16:04.324 [Pipeline] } 00:16:04.341 [Pipeline] // timeout 00:16:04.344 [Pipeline] } 00:16:04.352 [Pipeline] // stage 00:16:04.356 [Pipeline] } 00:16:04.365 [Pipeline] // catchError 00:16:04.372 [Pipeline] stage 00:16:04.373 [Pipeline] { (Stop VM) 00:16:04.380 [Pipeline] sh 00:16:04.653 + vagrant halt 00:16:07.180 ==> default: Halting domain... 00:16:11.372 [Pipeline] sh 00:16:11.648 + vagrant destroy -f 00:16:14.172 ==> default: Removing domain... 
00:16:14.184 [Pipeline] sh 00:16:14.461 + mv output /var/jenkins/workspace/raid-vg-autotest/output 00:16:14.469 [Pipeline] } 00:16:14.485 [Pipeline] // stage 00:16:14.490 [Pipeline] } 00:16:14.503 [Pipeline] // dir 00:16:14.509 [Pipeline] } 00:16:14.523 [Pipeline] // wrap 00:16:14.528 [Pipeline] } 00:16:14.543 [Pipeline] // catchError 00:16:14.554 [Pipeline] stage 00:16:14.557 [Pipeline] { (Epilogue) 00:16:14.569 [Pipeline] sh 00:16:14.848 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:16:20.177 [Pipeline] catchError 00:16:20.179 [Pipeline] { 00:16:20.193 [Pipeline] sh 00:16:20.473 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:16:20.473 Artifacts sizes are good 00:16:20.481 [Pipeline] } 00:16:20.496 [Pipeline] // catchError 00:16:20.508 [Pipeline] archiveArtifacts 00:16:20.516 Archiving artifacts 00:16:20.628 [Pipeline] cleanWs 00:16:20.640 [WS-CLEANUP] Deleting project workspace... 00:16:20.640 [WS-CLEANUP] Deferred wipeout is used... 00:16:20.646 [WS-CLEANUP] done 00:16:20.648 [Pipeline] } 00:16:20.663 [Pipeline] // stage 00:16:20.669 [Pipeline] } 00:16:20.683 [Pipeline] // node 00:16:20.688 [Pipeline] End of Pipeline 00:16:20.734 Finished: SUCCESS